Jan 22 10:36:46 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 10:36:46 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:36:46 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:46 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 
10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:36:47 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 10:36:47 crc kubenswrapper[4975]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 10:36:47 crc kubenswrapper[4975]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 10:36:47 crc kubenswrapper[4975]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 10:36:47 crc kubenswrapper[4975]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 22 10:36:47 crc kubenswrapper[4975]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 22 10:36:47 crc kubenswrapper[4975]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.525159 4975 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532526 4975 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532569 4975 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532583 4975 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532592 4975 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532603 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532613 4975 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532623 4975 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532633 4975 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532642 4975 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532650 4975 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532659 4975 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532668 4975 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532677 4975 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532686 4975 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532694 4975 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532702 4975 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532710 4975 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532719 4975 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532727 4975 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532735 4975 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532743 4975 feature_gate.go:330] unrecognized 
feature gate: PrivateHostedZoneAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532752 4975 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532760 4975 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532769 4975 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532778 4975 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532786 4975 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532794 4975 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532803 4975 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532811 4975 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532819 4975 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532829 4975 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532838 4975 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532849 4975 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532860 4975 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532869 4975 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532887 4975 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532899 4975 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532911 4975 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532921 4975 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532933 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532944 4975 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532957 4975 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532966 4975 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532978 4975 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
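The long runs of "unrecognized feature gate" warnings here, repeated below each time the kubelet re-parses its gate set, do not appear to indicate misconfiguration: the rendered kubelet config on OpenShift evidently carries the cluster-wide gate list, and the upstream kubelet only knows the Kubernetes gates, so every OpenShift-level name (GatewayAPI, NewOLM, PinnedImages, and so on) draws a warning and is dropped, while the recognized ones survive into the map that the feature_gate.go:386 lines print further down. Expressed as the featureGates stanza of a KubeletConfiguration, with values taken from that printed map (illustrative, not this node's actual file), the surviving set begins:

    featureGates:
      CloudDualStackNodeIPs: true
      DisableKubeletCloudCredentialProviders: true
      KMSv1: true
      ValidatingAdmissionPolicy: true
      DynamicResourceAllocation: false
      EventedPLEG: false
      NodeSwap: false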
Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532988 4975 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.532997 4975 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533006 4975 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533014 4975 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533023 4975 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533031 4975 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533040 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533049 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533058 4975 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533066 4975 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533075 4975 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533084 4975 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533092 4975 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533100 4975 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533108 4975 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533117 4975 feature_gate.go:330] unrecognized feature gate: Example Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533126 4975 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533134 4975 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533142 4975 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533150 4975 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533158 4975 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533169 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533177 4975 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533188 4975 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533197 4975 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533206 4975 feature_gate.go:330] unrecognized feature gate: 
MachineAPIProviderOpenStack Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.533215 4975 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533434 4975 flags.go:64] FLAG: --address="0.0.0.0" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533455 4975 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533480 4975 flags.go:64] FLAG: --anonymous-auth="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533493 4975 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533506 4975 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533516 4975 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533529 4975 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533541 4975 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533551 4975 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533561 4975 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533572 4975 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533582 4975 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533592 4975 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533602 4975 flags.go:64] FLAG: --cgroup-root="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533612 4975 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533622 4975 flags.go:64] FLAG: --client-ca-file="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533631 4975 flags.go:64] FLAG: --cloud-config="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533641 4975 flags.go:64] FLAG: --cloud-provider="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533650 4975 flags.go:64] FLAG: --cluster-dns="[]" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533667 4975 flags.go:64] FLAG: --cluster-domain="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533677 4975 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533687 4975 flags.go:64] FLAG: --config-dir="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533699 4975 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533710 4975 flags.go:64] FLAG: --container-log-max-files="5" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533723 4975 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533733 4975 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533745 4975 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533755 4975 flags.go:64] FLAG: 
--containerd-namespace="k8s.io" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533767 4975 flags.go:64] FLAG: --contention-profiling="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533777 4975 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533787 4975 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533797 4975 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533809 4975 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533821 4975 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533831 4975 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533841 4975 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533851 4975 flags.go:64] FLAG: --enable-load-reader="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533861 4975 flags.go:64] FLAG: --enable-server="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533870 4975 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533892 4975 flags.go:64] FLAG: --event-burst="100" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533902 4975 flags.go:64] FLAG: --event-qps="50" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533912 4975 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533922 4975 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533932 4975 flags.go:64] FLAG: --eviction-hard="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533952 4975 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533962 4975 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533972 4975 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.533997 4975 flags.go:64] FLAG: --eviction-soft="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534007 4975 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534017 4975 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534027 4975 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534038 4975 flags.go:64] FLAG: --experimental-mounter-path="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534048 4975 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534058 4975 flags.go:64] FLAG: --fail-swap-on="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534068 4975 flags.go:64] FLAG: --feature-gates="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534080 4975 flags.go:64] FLAG: --file-check-frequency="20s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534090 4975 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534101 4975 flags.go:64] FLAG: 
--hairpin-mode="promiscuous-bridge" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534111 4975 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534122 4975 flags.go:64] FLAG: --healthz-port="10248" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534133 4975 flags.go:64] FLAG: --help="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534144 4975 flags.go:64] FLAG: --hostname-override="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534154 4975 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534164 4975 flags.go:64] FLAG: --http-check-frequency="20s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534174 4975 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534184 4975 flags.go:64] FLAG: --image-credential-provider-config="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534193 4975 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534204 4975 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534213 4975 flags.go:64] FLAG: --image-service-endpoint="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534222 4975 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534232 4975 flags.go:64] FLAG: --kube-api-burst="100" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534242 4975 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534253 4975 flags.go:64] FLAG: --kube-api-qps="50" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534262 4975 flags.go:64] FLAG: --kube-reserved="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534274 4975 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534285 4975 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534298 4975 flags.go:64] FLAG: --kubelet-cgroups="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534311 4975 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534323 4975 flags.go:64] FLAG: --lock-file="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534369 4975 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534383 4975 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534396 4975 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534417 4975 flags.go:64] FLAG: --log-json-split-stream="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534441 4975 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534451 4975 flags.go:64] FLAG: --log-text-split-stream="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534461 4975 flags.go:64] FLAG: --logging-format="text" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534471 4975 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534481 4975 flags.go:64] FLAG: 
--make-iptables-util-chains="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534491 4975 flags.go:64] FLAG: --manifest-url="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534501 4975 flags.go:64] FLAG: --manifest-url-header="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534515 4975 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534525 4975 flags.go:64] FLAG: --max-open-files="1000000" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534537 4975 flags.go:64] FLAG: --max-pods="110" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534548 4975 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534559 4975 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534569 4975 flags.go:64] FLAG: --memory-manager-policy="None" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534579 4975 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534589 4975 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534600 4975 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534610 4975 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534631 4975 flags.go:64] FLAG: --node-status-max-images="50" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534642 4975 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534652 4975 flags.go:64] FLAG: --oom-score-adj="-999" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534662 4975 flags.go:64] FLAG: --pod-cidr="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534672 4975 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534686 4975 flags.go:64] FLAG: --pod-manifest-path="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534695 4975 flags.go:64] FLAG: --pod-max-pids="-1" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534705 4975 flags.go:64] FLAG: --pods-per-core="0" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534715 4975 flags.go:64] FLAG: --port="10250" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534725 4975 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534735 4975 flags.go:64] FLAG: --provider-id="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534745 4975 flags.go:64] FLAG: --qos-reserved="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534754 4975 flags.go:64] FLAG: --read-only-port="10255" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534765 4975 flags.go:64] FLAG: --register-node="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534774 4975 flags.go:64] FLAG: --register-schedulable="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534784 4975 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534800 4975 flags.go:64] FLAG: 
--registry-burst="10" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534810 4975 flags.go:64] FLAG: --registry-qps="5" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534820 4975 flags.go:64] FLAG: --reserved-cpus="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534848 4975 flags.go:64] FLAG: --reserved-memory="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534861 4975 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534871 4975 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534881 4975 flags.go:64] FLAG: --rotate-certificates="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534891 4975 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534903 4975 flags.go:64] FLAG: --runonce="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534913 4975 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534923 4975 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534933 4975 flags.go:64] FLAG: --seccomp-default="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534974 4975 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534986 4975 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.534996 4975 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535006 4975 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535016 4975 flags.go:64] FLAG: --storage-driver-password="root" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535026 4975 flags.go:64] FLAG: --storage-driver-secure="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535036 4975 flags.go:64] FLAG: --storage-driver-table="stats" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535045 4975 flags.go:64] FLAG: --storage-driver-user="root" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535055 4975 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535065 4975 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535075 4975 flags.go:64] FLAG: --system-cgroups="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535085 4975 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535102 4975 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535111 4975 flags.go:64] FLAG: --tls-cert-file="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535121 4975 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535142 4975 flags.go:64] FLAG: --tls-min-version="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535152 4975 flags.go:64] FLAG: --tls-private-key-file="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535162 4975 flags.go:64] FLAG: --topology-manager-policy="none" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535171 4975 flags.go:64] FLAG: --topology-manager-policy-options="" 
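The flags.go:64 block above (it runs through --volume-stats-agg-period just below) is the kubelet echoing every flag with its effective value, defaults included, so most of these lines record nothing that was actually set. The handful of clearly non-default values, --config, --node-ip, --node-labels, --register-with-taints, --system-reserved, and --pod-infra-container-image among them, are presumably what this node's kubelet.service unit passes on the command line; systemctl cat kubelet on the host would confirm that, though the unit itself is not part of this log.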
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535181 4975 flags.go:64] FLAG: --topology-manager-scope="container" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535192 4975 flags.go:64] FLAG: --v="2" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535205 4975 flags.go:64] FLAG: --version="false" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535219 4975 flags.go:64] FLAG: --vmodule="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535230 4975 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.535241 4975 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535520 4975 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535534 4975 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535558 4975 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535568 4975 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535577 4975 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535587 4975 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535596 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535605 4975 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535616 4975 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535625 4975 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535633 4975 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535642 4975 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535655 4975 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535665 4975 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535703 4975 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535715 4975 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535725 4975 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535735 4975 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535745 4975 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535754 4975 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535762 4975 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535771 4975 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535781 4975 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535789 4975 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535798 4975 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535806 4975 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535814 4975 feature_gate.go:330] unrecognized feature gate: Example Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535822 4975 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535831 4975 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535842 4975 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535858 4975 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535867 4975 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535876 4975 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535885 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535893 4975 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535902 4975 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535910 4975 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535918 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535939 4975 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535949 4975 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535958 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535966 4975 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535975 4975 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535983 4975 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.535993 4975 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536002 4975 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536011 4975 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536019 4975 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536028 4975 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536036 4975 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536045 4975 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536053 4975 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536061 4975 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536070 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536079 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536090 4975 
feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536101 4975 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536112 4975 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536122 4975 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536133 4975 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536142 4975 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536151 4975 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536163 4975 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536172 4975 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536180 4975 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536189 4975 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536198 4975 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536207 4975 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536215 4975 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536224 4975 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.536233 4975 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.536246 4975 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.588742 4975 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.588797 4975 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.588925 4975 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.588947 4975 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.588956 4975 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.588965 4975 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImages Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.588976 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.588986 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.588997 4975 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589007 4975 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589017 4975 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589027 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589037 4975 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589045 4975 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589057 4975 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589068 4975 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589076 4975 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589084 4975 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589094 4975 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589103 4975 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589111 4975 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589118 4975 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589126 4975 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589135 4975 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589144 4975 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589153 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589161 4975 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589170 4975 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589179 4975 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589188 4975 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589195 4975 feature_gate.go:330] unrecognized feature gate: 
PersistentIPsForVirtualization Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589203 4975 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589212 4975 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589222 4975 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589232 4975 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589240 4975 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589260 4975 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589270 4975 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589280 4975 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589289 4975 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589297 4975 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589305 4975 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589313 4975 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589321 4975 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589355 4975 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589364 4975 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589371 4975 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589379 4975 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589386 4975 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589394 4975 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589402 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589410 4975 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589418 4975 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589425 4975 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589433 4975 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589442 4975 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters 
Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589449 4975 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589457 4975 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589465 4975 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589472 4975 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589480 4975 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589488 4975 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589495 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589503 4975 feature_gate.go:330] unrecognized feature gate: Example Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589510 4975 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589518 4975 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589526 4975 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589533 4975 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589542 4975 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589552 4975 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589561 4975 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589569 4975 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589579 4975 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.589592 4975 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589814 4975 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589830 4975 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589839 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589847 4975 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589855 4975 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589862 4975 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589870 4975 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589878 4975 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589886 4975 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589893 4975 feature_gate.go:330] unrecognized feature gate: Example Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589901 4975 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589909 4975 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589917 4975 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589925 4975 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589934 4975 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589941 4975 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589949 4975 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589957 4975 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589965 4975 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 10:36:47 crc 
kubenswrapper[4975]: W0122 10:36:47.589973 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589984 4975 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.589994 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590003 4975 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590013 4975 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590024 4975 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590034 4975 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590041 4975 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590049 4975 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590057 4975 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590065 4975 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590073 4975 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590081 4975 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590089 4975 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590097 4975 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590106 4975 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590115 4975 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590123 4975 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590130 4975 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590138 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590146 4975 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590154 4975 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590161 4975 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590169 4975 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590176 4975 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590186 4975 feature_gate.go:330] unrecognized feature 
gate: GatewayAPI Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590194 4975 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590202 4975 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590212 4975 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590223 4975 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590232 4975 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590240 4975 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590248 4975 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590256 4975 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590263 4975 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590271 4975 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590279 4975 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590287 4975 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590295 4975 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590302 4975 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590310 4975 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590320 4975 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590364 4975 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590376 4975 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590386 4975 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590396 4975 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590404 4975 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590414 4975 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590425 4975 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590435 4975 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590445 4975 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.590456 4975 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.590469 4975 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.590739 4975 server.go:940] "Client rotation is on, will bootstrap in background" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.595416 4975 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.595542 4975 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.596407 4975 server.go:997] "Starting client certificate rotation" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.596448 4975 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.596651 4975 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-10 16:19:17.212472199 +0000 UTC Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.596780 4975 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.606033 4975 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.607633 4975 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.608640 4975 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.616208 4975 log.go:25] "Validated CRI v1 runtime API" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.639060 4975 log.go:25] "Validated CRI v1 image API" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.640627 4975 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 
10:36:47.642881 4975 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-10-31-17-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.642965 4975 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.671632 4975 manager.go:217] Machine: {Timestamp:2026-01-22 10:36:47.669385938 +0000 UTC m=+0.327422118 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:355792bf-2b64-4c58-a9ae-ab0c7ede715b BootID:b2a3c802-a546-4128-b88a-6c9d41dab7dc Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:21:97:5c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:21:97:5c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:da:47:7b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:88:95:c5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:14:cb:e8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e8:cf:1c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:26:4a:20:4e:95:b4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:7a:1c:d1:b3:60 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction 
Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.672020 4975 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
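[editor's note] The Machine entry above reports NumCores:12, NumPhysicalCores:1, NumSockets:12: twelve threads, each on its own socket, every one with core Id:0, which is consistent with a VM that exposes each vCPU as a single-core socket. One way to reproduce the logged triple from per-thread (socket, core) tuples is counting threads, distinct core ids, and distinct socket ids; cAdvisor's exact tallying is not shown in the log, so this is an assumption-laden sketch.

```go
package main

import "fmt"

// cpu mirrors one thread entry in the Topology dump: which socket it
// sits on and which core id it reports.
type cpu struct{ socket, coreID int }

func main() {
	// 12 threads on SocketID 0..11, all reporting core Id:0, as in the log.
	var cpus []cpu
	for s := 0; s < 12; s++ {
		cpus = append(cpus, cpu{socket: s, coreID: 0})
	}

	sockets := map[int]struct{}{}
	coreIDs := map[int]struct{}{}
	for _, c := range cpus {
		sockets[c.socket] = struct{}{}
		coreIDs[c.coreID] = struct{}{}
	}
	// Threads, distinct core ids, distinct socket ids -> 12, 1, 12.
	fmt.Printf("NumCores:%d NumPhysicalCores:%d NumSockets:%d\n",
		len(cpus), len(coreIDs), len(sockets))
}
```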
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.672211 4975 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.673369 4975 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.673692 4975 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.673753 4975 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.674079 4975 topology_manager.go:138] "Creating topology manager with none policy"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.674098 4975 container_manager_linux.go:303] "Creating device plugin manager"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.674476 4975 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.674530 4975 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.674839 4975 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.674969 4975 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.675979 4975 kubelet.go:418] "Attempting to sync node with API server"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.676013 4975 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
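[editor's note] The nodeConfig above carries the inputs for the standard node-allocatable calculation, Allocatable = Capacity - KubeReserved - SystemReserved - hard-eviction threshold. Using the MemoryCapacity from the Machine entry above (33654128640 bytes), SystemReserved memory of 350Mi, KubeReserved null, and the 100Mi memory.available hard-eviction threshold, the arithmetic works out as below; this is just the documented formula applied to the logged numbers, not kubelet code.

```go
package main

import "fmt"

const Mi = int64(1024 * 1024)

func main() {
	capacity := int64(33654128640) // MemoryCapacity from the Machine entry
	systemReserved := 350 * Mi     // SystemReserved "memory":"350Mi" in nodeConfig
	kubeReserved := int64(0)       // KubeReserved is null in nodeConfig
	evictionHard := 100 * Mi       // memory.available hard threshold

	allocatable := capacity - systemReserved - kubeReserved - evictionHard
	fmt.Printf("allocatable memory = %d bytes (~%.2f GiB)\n",
		allocatable, float64(allocatable)/float64(1<<30))
}
```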
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.676052 4975 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.676073 4975 kubelet.go:324] "Adding apiserver pod source"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.676091 4975 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.678266 4975 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.678483 4975 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.678539 4975 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.678604 4975 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.678709 4975 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.678823 4975 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
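[editor's note] Every failure above is the same connection-refused to api-int.crc.testing:6443: the kubelet is up before the API server, and its client-go reflectors keep retrying with backoff until the endpoint answers. A generic dial-with-backoff sketch of that pattern follows; it is illustrative, not client-go's actual backoff manager, and the attempt count is arbitrary.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithBackoff keeps retrying a TCP dial with exponential backoff,
// the way kubelet's clients retry an API server that is not yet listening.
func dialWithBackoff(addr string, attempts int) (net.Conn, error) {
	wait := 200 * time.Millisecond // cf. the lease controller's interval="200ms" later in the log
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		fmt.Printf("dial %s: %v; retrying in %v\n", addr, err, wait)
		time.Sleep(wait)
		wait *= 2 // back off: 200ms, 400ms, 800ms, ...
	}
	return nil, fmt.Errorf("giving up on %s after %d attempts: %w", addr, attempts, err)
}

func main() {
	if conn, err := dialWithBackoff("api-int.crc.testing:6443", 5); err == nil {
		conn.Close()
	}
}
```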
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.680409 4975 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681299 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681374 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681391 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681405 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681426 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681440 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681454 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681477 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681493 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681508 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681526 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.681540 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.682596 4975 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.683263 4975 server.go:1280] "Started kubelet"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.683746 4975 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.683859 4975 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.683921 4975 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.684468 4975 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.685079 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.685125 4975 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 22 10:36:47 crc systemd[1]: Started Kubernetes Kubelet.
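[editor's note] The podresources endpoint above is throttled at qps=100 with 10 burst tokens, which maps directly onto a token-bucket limiter. The sketch below uses golang.org/x/time/rate with the logged numbers to show the bucket semantics; the gRPC wiring around it is omitted and is not kubelet's code.

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// qps=100, burstTokens=10, as logged for the podresources endpoint.
	limiter := rate.NewLimiter(rate.Limit(100), 10)

	allowed, throttled := 0, 0
	// Fire 50 back-to-back requests: roughly the first 10 ride the
	// burst, the rest are admitted only as tokens refill at 100/s.
	for i := 0; i < 50; i++ {
		if limiter.Allow() {
			allowed++
		} else {
			throttled++
		}
	}
	fmt.Printf("allowed=%d throttled=%d\n", allowed, throttled)
}
```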
Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.688124 4975 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.688489 4975 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.688575 4975 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.689097 4975 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d0741c4d7b83d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 10:36:47.683213373 +0000 UTC m=+0.341249553,LastTimestamp:2026-01-22 10:36:47.683213373 +0000 UTC m=+0.341249553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.689635 4975 server.go:460] "Adding debug handlers to kubelet server"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.689666 4975 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.685636 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:29:51.880683871 +0000 UTC
Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.698422 4975 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.698547 4975 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.700823 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="200ms"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.706471 4975 factory.go:153] Registering CRI-O factory
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.706506 4975 factory.go:221] Registration of the crio container factory successfully
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.706596 4975 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.706610 4975 factory.go:55] Registering systemd factory
Jan 22 10:36:47 crc
kubenswrapper[4975]: I0122 10:36:47.706617 4975 factory.go:221] Registration of the systemd container factory successfully Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.706640 4975 factory.go:103] Registering Raw factory Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.706657 4975 manager.go:1196] Started watching for new ooms in manager Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.707311 4975 manager.go:319] Starting recovery of all containers Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709106 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709194 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709217 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709235 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709252 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709272 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709289 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709306 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709326 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709375 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709401 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709431 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709457 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709480 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709535 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709554 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709572 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709590 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709635 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709653 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709671 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709688 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709706 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709727 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709744 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709762 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709782 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709809 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709826 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709844 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709863 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709881 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709899 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709920 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709940 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709957 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709975 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.709993 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710010 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710028 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710046 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710068 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710126 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710154 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710176 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710199 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710222 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710247 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710270 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710295 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710323 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710388 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710425 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710454 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710481 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710511 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710539 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710588 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710613 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710636 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710663 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710687 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710710 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710730 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710749 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710765 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710783 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710800 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710830 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710849 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710867 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710886 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710904 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710927 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710960 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710977 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.710995 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711013 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711032 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711049 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711067 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711084 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711100 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711117 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711138 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711154 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711172 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711189 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711206 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711255 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711274 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711293 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711310 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711367 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711388 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711407 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711430 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711455 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711477 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711497 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711515 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711534 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711551 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711571 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711598 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711620 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711642 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711707 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711725 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711745 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711765 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711783 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711803 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711821 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711839 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711855 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711872 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711900 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711916 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711934 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711952 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711970 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.711989 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.712008 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.712025 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.712042 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.712060 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.712080 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.712100 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.712117 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.712135 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713273 4975 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713316 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713374 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713398 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713425 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713450 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713475 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713496 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713513 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713531 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713548 4975 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713566 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713584 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713601 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713618 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713637 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713654 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713672 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713688 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713707 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713723 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713752 4975 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713773 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713792 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713809 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713826 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713844 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713871 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713889 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713905 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713923 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713940 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713958 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713975 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.713994 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714013 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714031 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714053 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714069 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714087 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714104 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714121 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714140 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714157 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714175 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714191 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714209 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714226 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714244 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714261 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714279 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714297 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714321 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714369 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714386 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714472 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714492 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714509 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714527 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714544 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714583 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714600 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714617 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714638 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714656 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714676 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714695 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714713 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714734 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714757 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714780 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714805 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714828 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714850 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714870 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714893 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714917 4975 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714940 4975 reconstruct.go:97] "Volume reconstruction finished" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.714955 4975 reconciler.go:26] "Reconciler: start to sync state" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.740128 4975 manager.go:324] Recovery completed Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.749578 4975 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.753513 4975 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.753560 4975 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.753581 4975 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.753691 4975 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.755599 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:47 crc kubenswrapper[4975]: W0122 10:36:47.756156 4975 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.756252 4975 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.757080 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.757129 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.757142 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.760235 4975 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.760258 4975 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.760280 4975 state_mem.go:36] "Initialized new in-memory state store" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.770493 4975 policy_none.go:49] "None policy: Start" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.771658 4975 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.771687 4975 state_mem.go:35] "Initializing new in-memory state store" Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.789366 4975 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
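
The long run of reconstruct.go:130 entries above is the kubelet, freshly restarted, rebuilding its actual-state-of-world cache from what is left on disk: every volume directory still present under /var/lib/kubelet/pods is registered as "uncertain" until the reconciler can prove it mounted, the single reconstruct.go:144 entry does the same for a CSI global device mount, and the pass closes with "Volume reconstruction finished" and "Reconciler: start to sync state". A minimal, stdlib-only sketch of that scan follows; the directory layout is the one the kubelet really uses, but the type and function names here are illustrative, not the kubelet's own:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// reconstructedVolume mirrors the fields visible in the log lines above:
// a pod UID, a plugin-qualified volume name, and an "uncertain" state.
// Illustrative type, not the kubelet's internal one.
type reconstructedVolume struct {
	PodUID     string
	VolumeName string
	Uncertain  bool
}

// reconstructVolumes scans <root>/pods/<podUID>/volumes/<plugin>/<name>,
// the layout the kubelet keeps under /var/lib/kubelet, and returns every
// volume directory it finds, marked uncertain pending verification.
func reconstructVolumes(root string) ([]reconstructedVolume, error) {
	pods, err := os.ReadDir(filepath.Join(root, "pods"))
	if err != nil {
		return nil, err
	}
	var out []reconstructedVolume
	for _, pod := range pods {
		if !pod.IsDir() {
			continue
		}
		volRoot := filepath.Join(root, "pods", pod.Name(), "volumes")
		plugins, err := os.ReadDir(volRoot)
		if err != nil {
			continue // pod left no volumes directory behind; nothing to do
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(volRoot, plugin.Name()))
			if err != nil {
				continue
			}
			for _, v := range vols {
				out = append(out, reconstructedVolume{
					PodUID:     pod.Name(),
					VolumeName: plugin.Name() + "/" + v.Name(),
					Uncertain:  true, // the reconciler verifies this later
				})
			}
		}
	}
	return out, nil
}

func main() {
	vols, err := reconstructVolumes("/var/lib/kubelet")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, v := range vols {
		fmt.Printf("volume is marked as uncertain: pod=%s volume=%s\n", v.PodUID, v.VolumeName)
	}
}
```

Everything after the reconciler start in this span is ordinary cold-boot noise: iptables rules initialized for both IP families, one skipped sync-loop iteration because the PLEG has not had a successful relist yet, and the first client-go reflector failing against api-int.crc.testing:6443 because the API server it is about to help start is not listening yet.
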
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.821553 4975 manager.go:334] "Starting Device Plugin manager"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.822772 4975 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.823121 4975 server.go:79] "Starting device plugin registration server"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.823589 4975 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.823606 4975 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.823748 4975 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.823961 4975 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.823982 4975 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.830593 4975 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.854826 4975 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"]
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.854979 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.856464 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.856533 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.856560 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.856741 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
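
The "SyncLoop ADD" with source="file" just above is the key line of this boot sequence: the API server is still refusing connections, so these five pods cannot come from the apiserver source; they are static pods decoded from manifests on the node's own disk (conventionally /etc/kubernetes/manifests, configurable via staticPodPath), which is how the control plane bootstraps itself. A rough sketch of such a file source, assuming a simple polling loop that reports file names rather than decoded Pod objects (the real kubelet decodes every manifest and also reacts to edits and removals):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// watchManifestDir polls a static-pod manifest directory and invokes add
// for every manifest it has not seen before, which is roughly the event
// the kubelet's file source feeds into the sync loop as "SyncLoop ADD".
func watchManifestDir(dir string, interval time.Duration, add func(path string)) {
	seen := map[string]bool{}
	for {
		entries, err := os.ReadDir(dir)
		if err == nil {
			for _, e := range entries {
				ext := filepath.Ext(e.Name())
				if e.IsDir() || (ext != ".yaml" && ext != ".yml" && ext != ".json") {
					continue
				}
				if !seen[e.Name()] {
					seen[e.Name()] = true
					add(filepath.Join(dir, e.Name()))
				}
			}
		}
		time.Sleep(interval)
	}
}

func main() {
	// Conventional static-pod path on this kind of node; blocks forever,
	// printing each manifest once, the way a pod source keeps running.
	watchManifestDir("/etc/kubernetes/manifests", 20*time.Second, func(path string) {
		fmt.Println("SyncLoop ADD (file source):", path)
	})
}
```

The "No sandbox for pod can be found. Need to start a new one" lines that follow are the direct consequence: each of those five static pods gets a fresh pod sandbox from CRI-O, since nothing survived the restart.
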
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.857253 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.857371 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.857772 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.857839 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.857863 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.858084 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.858201 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.858271 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.858880 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.858926 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.858945 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.859402 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.859446 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.859455 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.859589 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.859624 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.859655 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.859722 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.859758 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.860087 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.860862 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.860890 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.860905 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.860947 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.860962 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.860971 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.861109 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.861346 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.861382 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.862133 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.862163 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.862175 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.862221 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.862252 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.862261 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.862389 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.862414 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.863000 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.863023 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.863030 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.901940 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="400ms" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.916922 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.916989 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917029 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917087 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917127 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917157 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917261 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917361 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917440 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917521 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917585 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917636 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917682 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917726 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.917772 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.924532 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.925980 4975 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.926026 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.926042 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:47 crc kubenswrapper[4975]: I0122 10:36:47.926072 4975 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:36:47 crc kubenswrapper[4975]: E0122 10:36:47.926704 4975 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.018969 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019042 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019079 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019108 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019142 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019170 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019199 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019228 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019235 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019407 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019261 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019427 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019553 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019588 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019481 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019556 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019375 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:48 crc 
kubenswrapper[4975]: I0122 10:36:48.019568 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019473 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.020248 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.020291 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.019467 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.020368 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.020411 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.020451 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.020509 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
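
The interleaved reconciler_common.go:218 and operation_generator.go:637 entries here show the mount step of the reconciler: for every volume that passed VerifyControllerAttachedVolume (trivially true for hostPath volumes, which are never controller-attached), MountVolume runs SetUp, and for these hostPath volumes the succeeded message lands within microseconds of the started one because there is nothing to attach or mount, essentially only a path to check. An illustrative stand-in for that fast path, not the real plugin (which lives in pkg/volume/hostpath and also handles hostPath types, SELinux, and ownership):

```go
package main

import (
	"fmt"
	"os"
)

// hostPathVolume is an illustrative stand-in for a kubelet hostPath
// mounter: a volume name as it appears in the log plus a node path.
type hostPathVolume struct {
	Name string // e.g. "cert-dir"
	Path string // e.g. a static-pod resources directory on the host
}

// SetUp for a hostPath volume has no mount to perform; checking that the
// path exists (as the Directory hostPath type does) is essentially all
// the work, which is why "MountVolume.SetUp succeeded" follows
// "operationExecutor.MountVolume started" almost immediately.
func (v hostPathVolume) SetUp() error {
	info, err := os.Stat(v.Path)
	if err != nil {
		return fmt.Errorf("MountVolume.SetUp failed for volume %q: %w", v.Name, err)
	}
	if !info.IsDir() {
		return fmt.Errorf("MountVolume.SetUp failed for volume %q: %s is not a directory", v.Name, v.Path)
	}
	fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.Name)
	return nil
}

func main() {
	// Placeholder paths for the sketch; on the node above these would be
	// the host directories named in the UniqueName fields of the log.
	for _, v := range []hostPathVolume{
		{Name: "cert-dir", Path: os.TempDir()},
		{Name: "resource-dir", Path: os.TempDir()},
	} {
		if err := v.SetUp(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```
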
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.020642 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.020682 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.020710 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.127355 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.128942 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.128994 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.129011 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.129044 4975 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:36:48 crc kubenswrapper[4975]: E0122 10:36:48.129535 4975 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.208226 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.217594 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.236098 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.260003 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: W0122 10:36:48.262640 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-12977fcee66a752a32bb35c64ac0615b9443490671a9a6be2a8eec325241aad6 WatchSource:0}: Error finding container 12977fcee66a752a32bb35c64ac0615b9443490671a9a6be2a8eec325241aad6: Status 404 returned error can't find the container with id 12977fcee66a752a32bb35c64ac0615b9443490671a9a6be2a8eec325241aad6 Jan 22 10:36:48 crc kubenswrapper[4975]: W0122 10:36:48.263709 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3f50dc01d87996c879f62f18922dea35bf523250a03d196550e10a5f96f836b4 WatchSource:0}: Error finding container 3f50dc01d87996c879f62f18922dea35bf523250a03d196550e10a5f96f836b4: Status 404 returned error can't find the container with id 3f50dc01d87996c879f62f18922dea35bf523250a03d196550e10a5f96f836b4 Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.266232 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:48 crc kubenswrapper[4975]: W0122 10:36:48.273299 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0aa8ac837b3830ea984d645381c176805fbf4b9e6f21c61b8e37fe1c658e95ba WatchSource:0}: Error finding container 0aa8ac837b3830ea984d645381c176805fbf4b9e6f21c61b8e37fe1c658e95ba: Status 404 returned error can't find the container with id 0aa8ac837b3830ea984d645381c176805fbf4b9e6f21c61b8e37fe1c658e95ba Jan 22 10:36:48 crc kubenswrapper[4975]: W0122 10:36:48.280926 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1553eb5bc484b49e151dfe0522bcfa3af2ae0154bf12b338b605aeb31f43cfef WatchSource:0}: Error finding container 1553eb5bc484b49e151dfe0522bcfa3af2ae0154bf12b338b605aeb31f43cfef: Status 404 returned error can't find the container with id 1553eb5bc484b49e151dfe0522bcfa3af2ae0154bf12b338b605aeb31f43cfef Jan 22 10:36:48 crc kubenswrapper[4975]: W0122 10:36:48.293390 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-9e5d79ace7f501bceb8e3fb0a72d2099cdddb7a96a860154fd76602a6c3fd9d1 WatchSource:0}: Error finding container 9e5d79ace7f501bceb8e3fb0a72d2099cdddb7a96a860154fd76602a6c3fd9d1: Status 404 returned error can't find the container with id 9e5d79ace7f501bceb8e3fb0a72d2099cdddb7a96a860154fd76602a6c3fd9d1 Jan 22 10:36:48 crc kubenswrapper[4975]: E0122 10:36:48.303481 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="800ms" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.529801 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.531222 4975 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.531247 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.531255 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.531276 4975 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:36:48 crc kubenswrapper[4975]: E0122 10:36:48.531501 4975 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.684620 4975 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.698037 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:44:15.841545191 +0000 UTC Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.759650 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.759794 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1553eb5bc484b49e151dfe0522bcfa3af2ae0154bf12b338b605aeb31f43cfef"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.761666 4975 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b" exitCode=0 Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.761739 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.761767 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0aa8ac837b3830ea984d645381c176805fbf4b9e6f21c61b8e37fe1c658e95ba"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.761891 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.763062 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.763128 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.763136 4975 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363" exitCode=0 Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.763149 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.763196 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.763218 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3f50dc01d87996c879f62f18922dea35bf523250a03d196550e10a5f96f836b4"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.763451 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.764262 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.764308 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.764406 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.764675 4975 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a" exitCode=0 Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.764737 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.764773 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"12977fcee66a752a32bb35c64ac0615b9443490671a9a6be2a8eec325241aad6"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.764868 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.765643 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.765684 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.765701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.766736 4975 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69" exitCode=0 Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.766770 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.766789 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e5d79ace7f501bceb8e3fb0a72d2099cdddb7a96a860154fd76602a6c3fd9d1"} Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.766908 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.767921 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.767947 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.767962 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.772968 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.775198 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.775237 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:48 crc kubenswrapper[4975]: I0122 10:36:48.775249 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:48 crc kubenswrapper[4975]: W0122 10:36:48.869856 4975 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 22 10:36:48 crc kubenswrapper[4975]: E0122 10:36:48.869988 4975 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:36:49 crc kubenswrapper[4975]: W0122 10:36:49.039781 4975 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 22 10:36:49 crc kubenswrapper[4975]: E0122 10:36:49.039866 4975 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:36:49 crc kubenswrapper[4975]: E0122 10:36:49.104435 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="1.6s" Jan 22 10:36:49 crc kubenswrapper[4975]: W0122 10:36:49.206438 4975 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 22 10:36:49 crc kubenswrapper[4975]: E0122 10:36:49.206593 4975 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:36:49 crc kubenswrapper[4975]: W0122 10:36:49.269293 4975 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 22 10:36:49 crc kubenswrapper[4975]: E0122 10:36:49.269493 4975 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.332824 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.334257 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.334300 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.334314 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.334362 4975 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:36:49 crc kubenswrapper[4975]: E0122 10:36:49.334896 4975 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.668105 4975 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.699106 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:06:13.883479275 +0000 UTC Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.772912 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.772959 4975 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.772973 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.773090 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.776540 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.776585 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.776597 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.781471 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.781538 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.781561 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.781577 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.788105 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.788162 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.788177 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 
10:36:49.788285 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.792249 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.792282 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.792296 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.794407 4975 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c" exitCode=0 Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.794462 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.794556 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.795368 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.795394 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.795405 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.808441 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54"} Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.808579 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.809565 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.809620 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:49 crc kubenswrapper[4975]: I0122 10:36:49.809635 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.079707 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.699820 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:16:38.499035077 +0000 UTC Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.815701 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3"} Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.815828 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.816960 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.817014 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.817031 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.818178 4975 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11" exitCode=0 Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.818264 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11"} Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.818298 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.818454 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.818571 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.819614 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.819660 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.819678 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.820027 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.820078 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.820097 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.820145 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.820201 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.820226 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.935534 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.937410 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.937467 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.937487 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:50 crc kubenswrapper[4975]: I0122 10:36:50.937526 4975 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:36:51 crc kubenswrapper[4975]: I0122 10:36:51.700480 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:55:22.394871289 +0000 UTC Jan 22 10:36:51 crc kubenswrapper[4975]: I0122 10:36:51.826154 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28"} Jan 22 10:36:51 crc kubenswrapper[4975]: I0122 10:36:51.826204 4975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:36:51 crc kubenswrapper[4975]: I0122 10:36:51.826228 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369"} Jan 22 10:36:51 crc kubenswrapper[4975]: I0122 10:36:51.826268 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d"} Jan 22 10:36:51 crc kubenswrapper[4975]: I0122 10:36:51.826288 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:51 crc kubenswrapper[4975]: I0122 10:36:51.827471 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:51 crc kubenswrapper[4975]: I0122 10:36:51.827539 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:51 crc kubenswrapper[4975]: I0122 10:36:51.827558 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.233811 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.234083 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.235872 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.235929 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.235946 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:52 crc 
kubenswrapper[4975]: I0122 10:36:52.675432 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.700873 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:54:17.25299995 +0000 UTC Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.836099 4975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.836117 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787"} Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.836161 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.836182 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa"} Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.836162 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.837509 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.837559 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.837582 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.837600 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.837631 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.837679 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:52 crc kubenswrapper[4975]: I0122 10:36:52.953976 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.463059 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.701302 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:41:11.893894354 +0000 UTC Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.711798 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.712120 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.713983 4975 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.714063 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.714087 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.838865 4975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.838949 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.838963 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.840464 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.840526 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.840548 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.840733 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.840791 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:53 crc kubenswrapper[4975]: I0122 10:36:53.840816 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.083124 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.083428 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.084819 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.084899 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.084936 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.091771 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.702396 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:38:25.390629914 +0000 UTC Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.817461 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.841087 
4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.841169 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.842022 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.842055 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.842065 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.842443 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.842505 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:54 crc kubenswrapper[4975]: I0122 10:36:54.842529 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.064628 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.703447 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:24:56.116371673 +0000 UTC Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.844697 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.844719 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.846246 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.846296 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.846322 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.846374 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.846398 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:55 crc kubenswrapper[4975]: I0122 10:36:55.846406 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:56 crc kubenswrapper[4975]: I0122 10:36:56.703912 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:53:58.713889123 +0000 UTC Jan 22 10:36:56 crc kubenswrapper[4975]: I0122 10:36:56.712266 4975 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 10:36:56 crc kubenswrapper[4975]: I0122 10:36:56.712396 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:36:57 crc kubenswrapper[4975]: I0122 10:36:57.704875 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:23:15.687420221 +0000 UTC Jan 22 10:36:57 crc kubenswrapper[4975]: I0122 10:36:57.812943 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:36:57 crc kubenswrapper[4975]: I0122 10:36:57.813290 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:36:57 crc kubenswrapper[4975]: I0122 10:36:57.815371 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:36:57 crc kubenswrapper[4975]: I0122 10:36:57.815431 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:36:57 crc kubenswrapper[4975]: I0122 10:36:57.815451 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:36:57 crc kubenswrapper[4975]: E0122 10:36:57.831171 4975 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 10:36:58 crc kubenswrapper[4975]: I0122 10:36:58.705507 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:56:15.278186881 +0000 UTC Jan 22 10:36:59 crc kubenswrapper[4975]: E0122 10:36:59.669978 4975 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 22 10:36:59 crc kubenswrapper[4975]: I0122 10:36:59.686290 4975 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 10:36:59 crc kubenswrapper[4975]: I0122 10:36:59.706858 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:37:22.421825139 +0000 UTC Jan 22 10:37:00 crc kubenswrapper[4975]: E0122 10:37:00.705502 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 22 10:37:00 crc kubenswrapper[4975]: I0122 10:37:00.708840 4975 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:31:04.054755489 +0000 UTC Jan 22 10:37:00 crc kubenswrapper[4975]: I0122 10:37:00.756168 4975 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 10:37:00 crc kubenswrapper[4975]: I0122 10:37:00.756226 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 10:37:00 crc kubenswrapper[4975]: I0122 10:37:00.760600 4975 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 10:37:00 crc kubenswrapper[4975]: I0122 10:37:00.761282 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 10:37:01 crc kubenswrapper[4975]: I0122 10:37:01.709992 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:51:37.890832689 +0000 UTC Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.681642 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.681812 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.683077 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.683149 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.683172 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.689274 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.710615 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:32:19.247162177 +0000 UTC Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.863738 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.864803 4975 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.864862 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:02 crc kubenswrapper[4975]: I0122 10:37:02.864883 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:03 crc kubenswrapper[4975]: I0122 10:37:03.711775 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:50:15.043280789 +0000 UTC Jan 22 10:37:03 crc kubenswrapper[4975]: I0122 10:37:03.800541 4975 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 10:37:03 crc kubenswrapper[4975]: I0122 10:37:03.819982 4975 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 10:37:04 crc kubenswrapper[4975]: I0122 10:37:04.712558 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:16:45.214828393 +0000 UTC Jan 22 10:37:04 crc kubenswrapper[4975]: I0122 10:37:04.824781 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:37:04 crc kubenswrapper[4975]: I0122 10:37:04.824982 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:37:04 crc kubenswrapper[4975]: I0122 10:37:04.826567 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:04 crc kubenswrapper[4975]: I0122 10:37:04.826623 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:04 crc kubenswrapper[4975]: I0122 10:37:04.826642 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.090219 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.090508 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.092092 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.092164 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.092185 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.103936 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.713292 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:26:43.031250454 +0000 UTC Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.757185 4975 trace.go:236] Trace[1999823027]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 10:36:51.862) (total time: 13894ms): Jan 22 10:37:05 crc kubenswrapper[4975]: Trace[1999823027]: ---"Objects listed" error: 13894ms (10:37:05.757) Jan 22 10:37:05 crc kubenswrapper[4975]: Trace[1999823027]: [13.894483242s] [13.894483242s] END Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.757231 4975 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.757694 4975 trace.go:236] Trace[426167531]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 10:36:51.652) (total time: 14105ms): Jan 22 10:37:05 crc kubenswrapper[4975]: Trace[426167531]: ---"Objects listed" error: 14105ms (10:37:05.757) Jan 22 10:37:05 crc kubenswrapper[4975]: Trace[426167531]: [14.105159298s] [14.105159298s] END Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.757714 4975 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.757812 4975 trace.go:236] Trace[191537536]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 10:36:50.920) (total time: 14837ms): Jan 22 10:37:05 crc kubenswrapper[4975]: Trace[191537536]: ---"Objects listed" error: 14837ms (10:37:05.757) Jan 22 10:37:05 crc kubenswrapper[4975]: Trace[191537536]: [14.8374023s] [14.8374023s] END Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.757851 4975 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.759386 4975 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.760440 4975 trace.go:236] Trace[1489126338]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 10:36:51.464) (total time: 14295ms): Jan 22 10:37:05 crc kubenswrapper[4975]: Trace[1489126338]: ---"Objects listed" error: 14295ms (10:37:05.760) Jan 22 10:37:05 crc kubenswrapper[4975]: Trace[1489126338]: [14.295974648s] [14.295974648s] END Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.760482 4975 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 10:37:05 crc kubenswrapper[4975]: E0122 10:37:05.761125 4975 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.809206 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.815198 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.866409 4975 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55818->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.866495 4975 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41508->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.866500 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55818->192.168.126.11:17697: read: connection reset by peer"
Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.866577 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41508->192.168.126.11:17697: read: connection reset by peer"
Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.866909 4975 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 22 10:37:05 crc kubenswrapper[4975]: I0122 10:37:05.866937 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 22 10:37:05 crc kubenswrapper[4975]: E0122 10:37:05.887549 4975 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.688310 4975 apiserver.go:52] "Watching apiserver"
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.693155 4975 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.693607 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-etcd/etcd-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.694070 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.694173 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.694208 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.694471 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.694550 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.694689 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.695485 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.695630 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.695718 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.696759 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.698115 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.698169 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.698195 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.698358 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.698697 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.699149 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.700805 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.701706 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.713607 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 19:41:22.898517976 +0000 UTC Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.739365 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.755067 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.770673 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.786553 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.791648 4975 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.818055 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.835082 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.854607 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.866539 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.866653 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.866707 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.866818 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.866849 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.866880 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.866908 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.866937 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.866964 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" 
(UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867033 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867371 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867457 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867546 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867992 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868055 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868104 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868151 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868196 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868241 4975 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868288 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868382 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868437 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868486 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868536 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868584 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868637 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868682 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868729 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868775 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868829 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868876 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868924 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868976 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869023 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869071 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869189 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869242 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869384 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869447 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869567 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869615 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869658 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869726 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869785 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869845 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869905 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869949 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869996 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 
10:37:06.870117 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870164 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870208 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870254 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870298 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870376 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870424 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870522 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870570 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870616 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870661 4975 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870709 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870930 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.871008 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.871059 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867453 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867455 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867542 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867632 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.867789 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868084 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868122 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868130 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868580 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868826 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868876 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868880 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.868900 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869178 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869236 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869284 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869542 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.871609 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.871639 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869756 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869783 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.871842 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869798 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869790 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870150 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870311 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870677 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.870917 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.871056 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.871029 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.872004 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.872354 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.872396 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.869702 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.872765 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.871105 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.872872 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.872936 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.872943 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.872990 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873042 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873084 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873123 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873161 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873203 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873237 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873275 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873311 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873381 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873416 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873450 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873483 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873519 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873551 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873585 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873621 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.874987 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875066 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875105 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875141 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875180 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875218 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875311 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875386 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875420 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875456 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875489 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875523 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875557 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875590 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875623 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875658 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876091 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876147 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876185 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876217 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876253 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876285 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876317 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876376 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876414 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876446 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876479 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876512 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876548 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876580 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876614 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.873788 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.874654 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.874967 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875007 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875636 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875842 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875858 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.875923 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876419 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876434 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.877598 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.878254 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.878439 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.879821 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.879826 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.880069 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.880097 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.883837 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884048 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884226 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.876673 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884449 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884546 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884606 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884678 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884757 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884856 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.880658 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884921 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.884945 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.885012 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.885355 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.885078 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.885981 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.885918 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.885965 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.886223 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.886051 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.886411 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.886130 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.886729 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.886958 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.887087 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888870 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888929 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888980 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889032 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889073 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889120 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889165 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889209 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889254 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889423 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889634 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889685 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889740 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889825 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889871 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889910 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889955 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889999 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890039 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890085 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890127 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890168 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890211 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890254 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890301 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890373 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.887204 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.887359 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.887415 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.887731 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.887639 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.887889 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.887810 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888067 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888104 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb").
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888266 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888457 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888588 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888746 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888611 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889084 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.891733 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889429 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889529 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889633 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889708 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889709 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.889738 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890089 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890266 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890371 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890371 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.888019 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.890688 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.891003 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.891169 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.893556 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.895318 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.896910 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.896943 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.896970 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.897289 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.897602 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.897698 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.897730 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.897803 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.897855 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.897883 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.897980 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898040 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898095 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898177 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898225 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898246 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898327 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898455 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898533 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898593 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.898628 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.898937 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:37:07.398898685 +0000 UTC m=+20.056934835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.892125 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899072 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899076 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899150 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899191 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899196 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899213 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899417 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899459 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899497 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899634 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899650 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899683 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899722 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899760 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899795 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899788 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899832 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899865 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899902 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899941 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.899975 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900010 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900046 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900189 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900214 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900406 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900229 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900545 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900606 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900762 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900838 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900894 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900944 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900995 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901050 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901095 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901138 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901189 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901245 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901290 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901369 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901421 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901524 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901586 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901629 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901668 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901707 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901743 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901783 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900327 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900491 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900573 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901858 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.900945 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901068 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901181 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901425 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901491 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901118 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.902144 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.901824 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.902690 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.902794 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.902758 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.902955 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.902999 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903021 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903135 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903182 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903227 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903395 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903482 4975 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903514 4975 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903543 4975 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903567 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903588 4975 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903591 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903609 4975 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903670 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903694 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903714 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.903734 4975 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903833 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.903862 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:07.403828498 +0000 UTC m=+20.061864718 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903910 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903946 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.903981 4975 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904012 4975 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904040 4975 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904069 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904056 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904098 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904126 4975 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904156 4975 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904186 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904214 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904241 4975 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904264 4975 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904282 4975 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904305 4975 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904325 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904402 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904432 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904461 4975 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" 
Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904489 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904517 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904545 4975 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904566 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904585 4975 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904604 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904627 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904648 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904670 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904691 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904718 4975 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904747 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904775 4975 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904800 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904824 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904849 4975 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904873 4975 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904954 4975 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.904986 4975 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905011 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905038 4975 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905064 4975 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905089 4975 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905113 4975 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905137 4975 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905164 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc 
kubenswrapper[4975]: I0122 10:37:06.905191 4975 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905217 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905242 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905266 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905289 4975 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905317 4975 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905388 4975 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905415 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905442 4975 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905466 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905489 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905512 4975 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905538 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905567 4975 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905594 4975 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905619 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905645 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905673 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905698 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905725 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905754 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905784 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905813 4975 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905840 4975 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905866 4975 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905895 4975 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905920 4975 reconciler_common.go:293] "Volume detached for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905949 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905975 4975 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.905999 4975 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906026 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906053 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906078 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906105 4975 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906132 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906161 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906189 4975 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906216 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906244 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906272 4975 reconciler_common.go:293] "Volume detached 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906300 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906362 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906394 4975 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906422 4975 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906450 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906476 4975 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906502 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906528 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906556 4975 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906580 4975 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906605 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906631 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906656 4975 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.906671 4975 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.906831 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:07.406793768 +0000 UTC m=+20.064829928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906680 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906892 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906926 4975 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906957 4975 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.906985 4975 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907014 4975 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907042 4975 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907069 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907100 4975 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907131 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907194 4975 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907226 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907255 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907281 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907310 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907381 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907409 4975 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907436 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907463 4975 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907488 4975 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907514 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907542 4975 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.907886 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.908487 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.908503 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.908937 4975 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.912886 4975 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3" exitCode=255 Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.913544 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3"} Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.915960 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.916492 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.916674 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.916756 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.917277 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.902715 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.924957 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.928566 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.931032 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.931045 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.932640 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.932790 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.933250 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.933586 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.934022 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.934058 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.934082 4975 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.934152 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.934172 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:07.434143747 +0000 UTC m=+20.092179887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.934186 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.934210 4975 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:06 crc kubenswrapper[4975]: E0122 10:37:06.934291 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:07.43426485 +0000 UTC m=+20.092300960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.934390 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.934642 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.935496 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.935659 4975 scope.go:117] "RemoveContainer" containerID="dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.936676 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.937317 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.937728 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.938392 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.938426 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.938729 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.938963 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.939153 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.940115 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.941485 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.941950 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.942060 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.942321 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.942739 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.942858 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.943454 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.944659 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.945374 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.945764 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.945829 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.945914 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.946326 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.947716 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.948049 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.948181 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.949175 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.949369 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.949415 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.949975 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.949975 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.950322 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.950844 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.950877 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.951161 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.951382 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.951862 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.952275 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.952871 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.953251 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.953783 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.954030 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.956668 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.956801 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.959748 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.960897 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.961610 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.962777 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.963090 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.966389 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.972697 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.984292 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.987420 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.987935 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:06 crc kubenswrapper[4975]: I0122 10:37:06.999941 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009637 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009679 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009722 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009737 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009751 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009755 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009701 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009764 4975 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009829 4975 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009850 4975 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009867 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009879 4975 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009892 4975 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009907 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009919 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 
10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009933 4975 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009945 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009958 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009971 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009984 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009914 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.009996 4975 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010055 4975 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010074 4975 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010092 4975 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010105 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010117 4975 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010133 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010153 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010172 4975 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010188 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010201 4975 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010213 4975 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010225 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010241 4975 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010258 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010277 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010298 4975 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010312 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010324 4975 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010362 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010380 4975 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010398 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010414 4975 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010430 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010448 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010464 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010480 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010496 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010512 4975 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010528 4975 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010544 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010562 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010579 
4975 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010595 4975 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010611 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010626 4975 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010642 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010658 4975 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010674 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010692 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010708 4975 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010760 4975 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010778 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010796 4975 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010818 4975 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010833 4975 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.010849 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.016236 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.026794 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.033651 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.039654 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.047555 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.072724 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 10:37:07 crc kubenswrapper[4975]: W0122 10:37:07.087454 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-aa16fbdda0b4d4abbb6170639f08824a6431adabc82beeb11864cfe9d81b2473 WatchSource:0}: Error finding container aa16fbdda0b4d4abbb6170639f08824a6431adabc82beeb11864cfe9d81b2473: Status 404 returned error can't find the container with id aa16fbdda0b4d4abbb6170639f08824a6431adabc82beeb11864cfe9d81b2473
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.418106 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.418394 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:37:08.418362377 +0000 UTC m=+21.076398487 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.418690 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.418735 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.418835 4975 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.418854 4975 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.418895 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:08.418885341 +0000 UTC m=+21.076921451 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.418927 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:08.418905361 +0000 UTC m=+21.076941571 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.519533 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.520043 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.519836 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.520286 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.520308 4975 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.520460 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:08.520435133 +0000 UTC m=+21.178471283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.520199 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.520773 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.520895 4975 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.521064 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:08.521043489 +0000 UTC m=+21.179079639 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.714968 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:11:03.122430893 +0000 UTC
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.754717 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.754793 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.754942 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:37:07 crc kubenswrapper[4975]: E0122 10:37:07.755163 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.765102 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.765724 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.766627 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.767272 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.767946 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.769509 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.770156 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.770814 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.771967 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.772548 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.772832 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.773508 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.774250 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.775244 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.775835 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.776828 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.777466 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.778087 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.778987 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.779615 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.780654 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.781134 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.781849 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.782783 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.783497 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.784449 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.785104 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.786215 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.786773 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.787869 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.788389 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.788894 4975 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.789003 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.791252 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.791790 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.792647 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.792765 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.794529 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.795264 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.796346 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.797239 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.798516 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.799141 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.799930 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.801221 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.802650 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.803245 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.804481 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.805582 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.807052 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.807814 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.808931 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.809592 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.809833 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.810299 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.811645 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.812627 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.823568 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.844876 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.862267 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.889826 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z"
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.917273 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"eed6da6e1224ca8d60eff1316f3505004abde8b7828a0be660fa4c9e3e8832e1"}
Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.930657 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.930792 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18"} Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.930845 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc"} Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.930857 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aa16fbdda0b4d4abbb6170639f08824a6431adabc82beeb11864cfe9d81b2473"} Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.932225 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e"} Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.932269 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9f434400f713428535a54098d78cc94fba489f0455884671d775af1a9a0ab1a9"} Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.934697 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.936369 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5"} Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.936551 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.946966 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.971515 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:07 crc kubenswrapper[4975]: I0122 10:37:07.990291 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.005561 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.016879 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.026162 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.039869 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.053116 4975 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.064536 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.077799 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.435591 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.435666 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.435704 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.435778 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:37:10.435755937 +0000 UTC m=+23.093792057 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.435791 4975 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.435841 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:10.435829529 +0000 UTC m=+23.093865639 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.435940 4975 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.436104 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:10.436069876 +0000 UTC m=+23.094106146 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.536995 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.537073 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.537265 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.537301 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.537265 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.537319 4975 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.537349 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.537364 4975 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.537431 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:10.537405713 +0000 UTC m=+23.195441833 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.537485 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:10.537476945 +0000 UTC m=+23.195513055 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.716676 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:44:03.808527438 +0000 UTC Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.754147 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:08 crc kubenswrapper[4975]: E0122 10:37:08.754298 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.962027 4975 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.963895 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.963950 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.963963 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.964025 4975 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.975962 4975 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.976216 4975 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.977223 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.977274 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.977291 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.977320 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:08 crc kubenswrapper[4975]: I0122 10:37:08.977383 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:08Z","lastTransitionTime":"2026-01-22T10:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: E0122 10:37:09.021199 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:09Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.026005 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.026062 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.026075 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.026099 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.026120 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: E0122 10:37:09.046845 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:09Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.052918 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.052973 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
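[Editor's note] Every status patch in this stretch fails for the same reason: the serving certificate behind the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, months before this boot. A minimal Go sketch of how one might confirm that from the node itself (a hypothetical diagnostic, not part of the log or of any OpenShift tooling):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken from the kubelet error above. InsecureSkipVerify because
	// we only want to read the presented certificate, not trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

With stock tooling, the equivalent check would be openssl s_client -connect 127.0.0.1:9743 piped through openssl x509 -noout -dates.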
event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.052988 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.053009 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.053030 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: E0122 10:37:09.076447 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:09Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.084382 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.084431 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
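[Editor's note] The setters.go:603 entries embed the Ready condition as literal JSON. If the shape is hard to read inline, this short sketch decodes one; the field names are copied from the entries themselves and the message is shortened for brevity:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition mirrors the condition object logged by setters.go:603 above.
type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Shortened copy of the logged condition (message trimmed).
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s (%s: %s)\n", c.Type, c.Status,
		c.LastTransitionTime.Format(time.RFC3339), c.Reason, c.Message)
}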
event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.084443 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.084462 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.084475 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: E0122 10:37:09.118416 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:09Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.123958 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.124012 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
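[Editor's note] The patch payloads are hard to read because the err value is a quoted string that itself contains a quoted JSON document, which is why every JSON quote surfaces as \\\" in the journal. Assuming one wants to inspect a payload rather than eyeball the escapes, strconv.Unquote recovers plain JSON one quoting layer at a time; a sketch with a shortened stand-in payload:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Shortened stand-in for the inner quoted patch document above; one
	// Unquote pass per quoting layer strips the backslash escapes.
	quoted := `"{\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\"}]}}"`
	raw, err := strconv.Unquote(quoted)
	if err != nil {
		panic(err)
	}
	fmt.Println(raw) // {"status":{"conditions":[{"type":"Ready","status":"False"}]}}
}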
event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.124022 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.124044 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.124059 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: E0122 10:37:09.143639 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:09Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:09 crc kubenswrapper[4975]: E0122 10:37:09.143790 4975 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.145911 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.145942 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.145953 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.145978 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.145990 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.249839 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.249915 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.249939 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.249974 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.250001 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.352967 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.353013 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.353024 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.353041 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.353053 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.456573 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.456643 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.456676 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.456722 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.456742 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.559308 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.559401 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.559418 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.559441 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.559458 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.661654 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.661705 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.661717 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.661735 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.661747 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.716919 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:26:31.98134011 +0000 UTC Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.754844 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.754921 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:09 crc kubenswrapper[4975]: E0122 10:37:09.755017 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:09 crc kubenswrapper[4975]: E0122 10:37:09.755101 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.763521 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.763820 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.763920 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.764020 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.764099 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.866228 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.866277 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.866289 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.866307 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.866318 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.944972 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.962932 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:09Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.968968 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.969018 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.969032 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.969055 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.969072 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:09Z","lastTransitionTime":"2026-01-22T10:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:09 crc kubenswrapper[4975]: I0122 10:37:09.986219 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:09Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.003182 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.020440 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.044846 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.066154 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.072250 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.072290 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.072304 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.072324 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.072364 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.086297 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.119315 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.138888 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.176778 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.176821 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.176832 4975 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.176851 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.176861 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.280908 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.280958 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.280967 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.280982 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.280992 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.383812 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.383874 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.383891 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.383914 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.383931 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.460826 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.460931 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.460979 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.461129 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:37:14.461066003 +0000 UTC m=+27.119102123 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.461160 4975 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.461269 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:14.461253268 +0000 UTC m=+27.119289548 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.461308 4975 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.461511 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:14.461473744 +0000 UTC m=+27.119509894 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.487596 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.487631 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.487641 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.487654 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.487664 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.561888 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.561967 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.562102 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.562138 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.562150 4975 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.562161 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.562187 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.562206 4975 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.562210 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:14.562192065 +0000 UTC m=+27.220228175 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.562282 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:14.562259086 +0000 UTC m=+27.220295206 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.590069 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.590104 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.590114 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.590128 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.590150 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.611311 4975 csr.go:261] certificate signing request csr-8ljtf is approved, waiting to be issued Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.628753 4975 csr.go:257] certificate signing request csr-8ljtf is issued Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.692778 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.692832 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.692842 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.692861 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.692928 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.717969 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:17:31.023922626 +0000 UTC Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.754281 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:10 crc kubenswrapper[4975]: E0122 10:37:10.754427 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.794925 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.794962 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.794971 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.794984 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.794993 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.897549 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.897587 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.897596 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.897612 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.897622 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.950908 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-cqvxp"] Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.951259 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-cqvxp" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.953055 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.953308 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.954809 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.966654 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh8dv\" (UniqueName: \"kubernetes.io/projected/e2f714de-8040-41dc-b7d1-735a879df670-kube-api-access-jh8dv\") pod \"node-resolver-cqvxp\" (UID: \"e2f714de-8040-41dc-b7d1-735a879df670\") " pod="openshift-dns/node-resolver-cqvxp" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.966766 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e2f714de-8040-41dc-b7d1-735a879df670-hosts-file\") pod \"node-resolver-cqvxp\" (UID: \"e2f714de-8040-41dc-b7d1-735a879df670\") " pod="openshift-dns/node-resolver-cqvxp" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.973004 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.995055 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.999815 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.999860 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.999872 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.999890 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:10 crc kubenswrapper[4975]: I0122 10:37:10.999904 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:10Z","lastTransitionTime":"2026-01-22T10:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.013612 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.026715 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.038781 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.052966 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.067387 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e2f714de-8040-41dc-b7d1-735a879df670-hosts-file\") pod \"node-resolver-cqvxp\" (UID: \"e2f714de-8040-41dc-b7d1-735a879df670\") " pod="openshift-dns/node-resolver-cqvxp" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.067487 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh8dv\" (UniqueName: \"kubernetes.io/projected/e2f714de-8040-41dc-b7d1-735a879df670-kube-api-access-jh8dv\") pod \"node-resolver-cqvxp\" (UID: \"e2f714de-8040-41dc-b7d1-735a879df670\") " pod="openshift-dns/node-resolver-cqvxp" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.067603 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e2f714de-8040-41dc-b7d1-735a879df670-hosts-file\") pod \"node-resolver-cqvxp\" (UID: \"e2f714de-8040-41dc-b7d1-735a879df670\") " pod="openshift-dns/node-resolver-cqvxp" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.076390 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.101489 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh8dv\" (UniqueName: \"kubernetes.io/projected/e2f714de-8040-41dc-b7d1-735a879df670-kube-api-access-jh8dv\") pod \"node-resolver-cqvxp\" (UID: \"e2f714de-8040-41dc-b7d1-735a879df670\") " pod="openshift-dns/node-resolver-cqvxp" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.102248 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.102302 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.102313 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.102345 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.102356 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:11Z","lastTransitionTime":"2026-01-22T10:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.109542 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.161501 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.187122 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.205933 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.206230 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.206296 4975 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.206389 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.206459 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:11Z","lastTransitionTime":"2026-01-22T10:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.266256 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cqvxp" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.309355 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.309608 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.309680 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.309753 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.309828 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:11Z","lastTransitionTime":"2026-01-22T10:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.363724 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fmw72"] Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.364087 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qmklh"] Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.364352 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.364432 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.364831 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t5gcx"] Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.365615 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7c9rx"] Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.365847 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.367044 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.370671 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.372285 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.372312 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.372380 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.372676 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.372768 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.373181 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.373236 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.373347 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.373623 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.373880 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.374312 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.375953 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.376155 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.376156 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.376320 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.376377 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.376581 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.376838 4975 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.403650 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.413223 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.413269 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.413279 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.413294 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.413311 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:11Z","lastTransitionTime":"2026-01-22T10:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.426877 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.441587 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.453537 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.469599 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.470869 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pscbw\" (UniqueName: \"kubernetes.io/projected/c8f7b040-017f-4aa7-ae02-cda6b7a89026-kube-api-access-pscbw\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.470906 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-system-cni-dir\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.470932 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-os-release\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.470954 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-system-cni-dir\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.470969 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-cnibin\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.470985 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-slash\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471003 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c717ed91-9c30-47bc-b814-ceb1b25939a8-mcd-auth-proxy-config\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471025 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-ovn\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471047 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-env-overrides\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471083 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-bin\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471104 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c717ed91-9c30-47bc-b814-ceb1b25939a8-proxy-tls\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471124 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-var-lib-openvswitch\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471146 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-openvswitch\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471162 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-conf-dir\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471234 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mwgl\" (UniqueName: \"kubernetes.io/projected/c1de894a-1cf5-47fa-9fc5-0a52783fe152-kube-api-access-9mwgl\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471297 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-systemd\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471345 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-os-release\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471369 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7hlg\" (UniqueName: \"kubernetes.io/projected/c717ed91-9c30-47bc-b814-ceb1b25939a8-kube-api-access-l7hlg\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471395 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471422 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovn-node-metrics-cert\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471443 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-run-k8s-cni-cncf-io\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471465 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-var-lib-cni-multus\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471487 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-daemon-config\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471508 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-run-multus-certs\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471532 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-node-log\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471552 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhnd8\" (UniqueName: \"kubernetes.io/projected/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-kube-api-access-zhnd8\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471573 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471597 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-log-socket\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471620 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-script-lib\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471643 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-cnibin\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471664 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-var-lib-kubelet\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471687 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-etc-kubernetes\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471734 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-kubelet\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471756 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-systemd-units\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471781 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-etc-openvswitch\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471805 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-ovn-kubernetes\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471825 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-cni-dir\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471864 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-socket-dir-parent\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471885 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-hostroot\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471910 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c8f7b040-017f-4aa7-ae02-cda6b7a89026-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471933 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-netns\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471956 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-netd\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471977 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-cni-binary-copy\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.471999 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-var-lib-cni-bin\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.472020 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c717ed91-9c30-47bc-b814-ceb1b25939a8-rootfs\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.472046 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c8f7b040-017f-4aa7-ae02-cda6b7a89026-cni-binary-copy\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.472067 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-run-netns\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.472087 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-config\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.480668 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.494710 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.506240 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.515473 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.515518 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.515528 4975 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.515544 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.515556 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:11Z","lastTransitionTime":"2026-01-22T10:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.518376 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.535249 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.549699 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.568512 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.572801 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-socket-dir-parent\") pod \"multus-qmklh\" (UID: 
\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.572845 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-hostroot\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.572872 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c8f7b040-017f-4aa7-ae02-cda6b7a89026-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.572898 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-netns\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.572922 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-netd\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.572945 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-cni-binary-copy\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.572967 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-var-lib-cni-bin\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.572970 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-hostroot\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573021 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c717ed91-9c30-47bc-b814-ceb1b25939a8-rootfs\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.572989 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c717ed91-9c30-47bc-b814-ceb1b25939a8-rootfs\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573030 
4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-netns\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573034 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-netd\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573068 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c8f7b040-017f-4aa7-ae02-cda6b7a89026-cni-binary-copy\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573077 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-var-lib-cni-bin\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573093 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-run-netns\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573063 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-socket-dir-parent\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573113 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-config\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573132 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-run-netns\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573135 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-system-cni-dir\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573165 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-os-release\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573200 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pscbw\" (UniqueName: \"kubernetes.io/projected/c8f7b040-017f-4aa7-ae02-cda6b7a89026-kube-api-access-pscbw\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573222 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-slash\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573241 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c717ed91-9c30-47bc-b814-ceb1b25939a8-mcd-auth-proxy-config\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573247 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-system-cni-dir\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573260 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-system-cni-dir\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573281 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-cnibin\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573306 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-ovn\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573347 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-env-overrides\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573390 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-var-lib-openvswitch\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573409 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-bin\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573428 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c717ed91-9c30-47bc-b814-ceb1b25939a8-proxy-tls\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573450 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-openvswitch\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573468 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-conf-dir\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573489 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mwgl\" (UniqueName: \"kubernetes.io/projected/c1de894a-1cf5-47fa-9fc5-0a52783fe152-kube-api-access-9mwgl\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573506 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-os-release\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573509 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-systemd\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573537 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-os-release\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573553 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7hlg\" (UniqueName: \"kubernetes.io/projected/c717ed91-9c30-47bc-b814-ceb1b25939a8-kube-api-access-l7hlg\") pod 
\"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573571 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-node-log\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573586 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573601 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovn-node-metrics-cert\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573614 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-run-k8s-cni-cncf-io\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573628 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-var-lib-cni-multus\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573644 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-daemon-config\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573659 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-run-multus-certs\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573675 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhnd8\" (UniqueName: \"kubernetes.io/projected/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-kube-api-access-zhnd8\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573690 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7c9rx\" 
(UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573701 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c8f7b040-017f-4aa7-ae02-cda6b7a89026-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573705 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-log-socket\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573751 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-os-release\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573757 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-script-lib\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573776 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-cnibin\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573793 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-var-lib-kubelet\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573809 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-etc-kubernetes\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573842 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-kubelet\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573858 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-systemd-units\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573556 
4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-slash\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573873 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-etc-openvswitch\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573889 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-node-log\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573891 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-ovn-kubernetes\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573912 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-ovn-kubernetes\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573726 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-log-socket\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573920 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-cni-dir\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573931 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-var-lib-openvswitch\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573970 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-system-cni-dir\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573976 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-cnibin\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573999 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c8f7b040-017f-4aa7-ae02-cda6b7a89026-cni-binary-copy\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574012 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574039 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-ovn\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574073 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-config\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574110 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-openvswitch\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573537 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-systemd\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574154 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-etc-kubernetes\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574190 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-cnibin\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574192 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-bin\") pod \"ovnkube-node-t5gcx\" (UID: 
\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574244 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-var-lib-kubelet\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574275 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-systemd-units\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574306 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-kubelet\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574311 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c717ed91-9c30-47bc-b814-ceb1b25939a8-mcd-auth-proxy-config\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574362 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-etc-openvswitch\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574385 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-conf-dir\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574406 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-run-multus-certs\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574435 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-script-lib\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574442 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-run-k8s-cni-cncf-io\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574463 
4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-host-var-lib-cni-multus\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574489 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-env-overrides\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.573890 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-cni-binary-copy\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574702 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-cni-dir\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.574978 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-multus-daemon-config\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.579892 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovn-node-metrics-cert\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.580003 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c717ed91-9c30-47bc-b814-ceb1b25939a8-proxy-tls\") pod \"machine-config-daemon-fmw72\" (UID: \"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.586487 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.590945 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhnd8\" (UniqueName: \"kubernetes.io/projected/9fcb4bcb-0a79-4420-bb51-e4585d3306a9-kube-api-access-zhnd8\") pod \"multus-qmklh\" (UID: \"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\") " pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.593593 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mwgl\" (UniqueName: \"kubernetes.io/projected/c1de894a-1cf5-47fa-9fc5-0a52783fe152-kube-api-access-9mwgl\") pod \"ovnkube-node-t5gcx\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.606026 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7hlg\" (UniqueName: \"kubernetes.io/projected/c717ed91-9c30-47bc-b814-ceb1b25939a8-kube-api-access-l7hlg\") pod \"machine-config-daemon-fmw72\" (UID: 
\"c717ed91-9c30-47bc-b814-ceb1b25939a8\") " pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.606372 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pscbw\" (UniqueName: \"kubernetes.io/projected/c8f7b040-017f-4aa7-ae02-cda6b7a89026-kube-api-access-pscbw\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.606714 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.618355 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.618396 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.618409 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.618427 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:11 
crc kubenswrapper[4975]: I0122 10:37:11.618439 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:11Z","lastTransitionTime":"2026-01-22T10:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.630540 4975 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 10:32:10 +0000 UTC, rotation deadline is 2026-10-14 11:19:45.09375704 +0000 UTC Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.630629 4975 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6360h42m33.463130046s for next certificate rotation Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.637241 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.660234 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.679837 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qmklh" Jan 22 10:37:11 crc kubenswrapper[4975]: W0122 10:37:11.692077 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fcb4bcb_0a79_4420_bb51_e4585d3306a9.slice/crio-ef931af2e43df3e4ea3d2ca00badcab23fa176e1a5a1ad6a4fa6e8cc1dbf7e50 WatchSource:0}: Error finding container ef931af2e43df3e4ea3d2ca00badcab23fa176e1a5a1ad6a4fa6e8cc1dbf7e50: Status 404 returned error can't find the container with id ef931af2e43df3e4ea3d2ca00badcab23fa176e1a5a1ad6a4fa6e8cc1dbf7e50 Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.693919 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.699745 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.707456 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.709277 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\"
,\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\
\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.718691 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:44:36.982958912 +0000 UTC Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.719965 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.719994 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.720003 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.720018 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.720028 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:11Z","lastTransitionTime":"2026-01-22T10:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.721479 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.734393 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.745987 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.754514 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.754553 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:11 crc kubenswrapper[4975]: E0122 10:37:11.754628 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:11 crc kubenswrapper[4975]: E0122 10:37:11.754786 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.760500 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\
\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.771712 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.791220 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.794427 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c8f7b040-017f-4aa7-ae02-cda6b7a89026-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7c9rx\" (UID: \"c8f7b040-017f-4aa7-ae02-cda6b7a89026\") " pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.813524 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.823430 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.823460 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.823470 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.823952 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.823974 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:11Z","lastTransitionTime":"2026-01-22T10:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.828669 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.925791 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.925831 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.925844 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.925863 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.925877 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:11Z","lastTransitionTime":"2026-01-22T10:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.950996 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"7071e7531b74fed9b8fc0bf055a939dbdc460c0100c72c2f42f6274c1cd9c941"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.952121 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.952147 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"609a0c4b49a23bf035b02035014f26d5697b280d3b976f765ee49364d080c3bc"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.953463 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmklh" event={"ID":"9fcb4bcb-0a79-4420-bb51-e4585d3306a9","Type":"ContainerStarted","Data":"6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.953619 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmklh" event={"ID":"9fcb4bcb-0a79-4420-bb51-e4585d3306a9","Type":"ContainerStarted","Data":"ef931af2e43df3e4ea3d2ca00badcab23fa176e1a5a1ad6a4fa6e8cc1dbf7e50"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.954594 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cqvxp" event={"ID":"e2f714de-8040-41dc-b7d1-735a879df670","Type":"ContainerStarted","Data":"d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.954724 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cqvxp" event={"ID":"e2f714de-8040-41dc-b7d1-735a879df670","Type":"ContainerStarted","Data":"c4472914f5eac8bdf44fed68720cbf22f687afe2fa7d26e0414dbd0c4f2d9f49"} Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.971105 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:11 crc kubenswrapper[4975]: I0122 10:37:11.987650 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.001745 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.014070 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.024038 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: W0122 10:37:12.025708 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8f7b040_017f_4aa7_ae02_cda6b7a89026.slice/crio-6d2bf65663be6be0e96e272305ffbf8786117905b30959037f54f3ae28440667 WatchSource:0}: Error finding container 6d2bf65663be6be0e96e272305ffbf8786117905b30959037f54f3ae28440667: Status 404 returned error can't find the container with id 6d2bf65663be6be0e96e272305ffbf8786117905b30959037f54f3ae28440667 Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.027308 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.027359 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.027371 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.027387 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.027399 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.036838 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.048468 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.059854 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.071599 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.083649 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.094018 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.108955 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.125817 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.130366 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.130425 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.130444 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.130470 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.130487 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.152466 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.177398 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.238486 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.238540 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.238558 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.238582 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.238600 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.352181 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.352254 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.352264 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.352278 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.352287 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.455729 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.455767 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.455779 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.455797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.455810 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.558605 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.558664 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.558680 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.558701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.558715 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.660992 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.661043 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.661054 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.661072 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.661083 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.718961 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:07:24.460260309 +0000 UTC Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.754919 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:12 crc kubenswrapper[4975]: E0122 10:37:12.755042 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.762893 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.762925 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.762934 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.762946 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.762954 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.865195 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.865221 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.865230 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.865242 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.865250 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.919812 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-zdgbb"] Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.920153 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.922908 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.923088 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.923189 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.923289 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.946710 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.958208 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79" exitCode=0 Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.958279 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.960210 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.960387 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.961867 4975 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b040-017f-4aa7-ae02-cda6b7a89026" containerID="a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3" exitCode=0 Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.961941 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" event={"ID":"c8f7b040-017f-4aa7-ae02-cda6b7a89026","Type":"ContainerDied","Data":"a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.961976 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" event={"ID":"c8f7b040-017f-4aa7-ae02-cda6b7a89026","Type":"ContainerStarted","Data":"6d2bf65663be6be0e96e272305ffbf8786117905b30959037f54f3ae28440667"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.965015 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.967138 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.967167 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.967176 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.967189 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.967199 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:12Z","lastTransitionTime":"2026-01-22T10:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.985849 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:12Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.987478 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f152e5a-dd64-45fc-9a8f-31652989056a-serviceca\") pod \"node-ca-zdgbb\" (UID: \"2f152e5a-dd64-45fc-9a8f-31652989056a\") " pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.987548 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lclx5\" (UniqueName: \"kubernetes.io/projected/2f152e5a-dd64-45fc-9a8f-31652989056a-kube-api-access-lclx5\") pod \"node-ca-zdgbb\" (UID: \"2f152e5a-dd64-45fc-9a8f-31652989056a\") " pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:12 crc kubenswrapper[4975]: I0122 10:37:12.987626 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f152e5a-dd64-45fc-9a8f-31652989056a-host\") pod \"node-ca-zdgbb\" (UID: \"2f152e5a-dd64-45fc-9a8f-31652989056a\") " pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 
10:37:13.003917 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.018577 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.029520 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.042652 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.057456 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.069406 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.069955 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.070005 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.070023 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.070040 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.070050 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.088923 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f152e5a-dd64-45fc-9a8f-31652989056a-host\") pod \"node-ca-zdgbb\" (UID: \"2f152e5a-dd64-45fc-9a8f-31652989056a\") " pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.089004 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f152e5a-dd64-45fc-9a8f-31652989056a-serviceca\") pod \"node-ca-zdgbb\" (UID: \"2f152e5a-dd64-45fc-9a8f-31652989056a\") " pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.089028 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lclx5\" (UniqueName: \"kubernetes.io/projected/2f152e5a-dd64-45fc-9a8f-31652989056a-kube-api-access-lclx5\") pod \"node-ca-zdgbb\" (UID: \"2f152e5a-dd64-45fc-9a8f-31652989056a\") " pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.089074 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f152e5a-dd64-45fc-9a8f-31652989056a-host\") pod \"node-ca-zdgbb\" (UID: \"2f152e5a-dd64-45fc-9a8f-31652989056a\") " pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.089999 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f152e5a-dd64-45fc-9a8f-31652989056a-serviceca\") pod \"node-ca-zdgbb\" (UID: \"2f152e5a-dd64-45fc-9a8f-31652989056a\") " pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.097953 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.113263 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.114539 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lclx5\" (UniqueName: \"kubernetes.io/projected/2f152e5a-dd64-45fc-9a8f-31652989056a-kube-api-access-lclx5\") pod \"node-ca-zdgbb\" (UID: \"2f152e5a-dd64-45fc-9a8f-31652989056a\") " pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.131357 4975 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.146493 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.160785 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.172742 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.173061 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.173080 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.173105 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.173122 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.177819 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.192210 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.205517 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.225777 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.237569 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.251194 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc 
kubenswrapper[4975]: I0122 10:37:13.259522 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zdgbb" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.263667 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.275798 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.275840 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.275852 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.275869 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.275881 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:13 crc kubenswrapper[4975]: W0122 10:37:13.277403 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f152e5a_dd64_45fc_9a8f_31652989056a.slice/crio-0f8fbe72b0837a2c27129135c28e442109a100856f9f493e61f9430ba161103e WatchSource:0}: Error finding container 0f8fbe72b0837a2c27129135c28e442109a100856f9f493e61f9430ba161103e: Status 404 returned error can't find the container with id 0f8fbe72b0837a2c27129135c28e442109a100856f9f493e61f9430ba161103e Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.278367 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.289652 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.310724 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb4158
1c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.324636 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.336145 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.357418 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.370408 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.380222 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.380256 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.380265 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.380277 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.380286 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.391063 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.408923 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.482475 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.482518 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.482528 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.482546 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.482560 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.587315 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.587396 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.587407 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.587428 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.587441 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.689504 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.689542 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.689550 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.689564 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.689573 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.719516 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:41:11.176850732 +0000 UTC Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.756259 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:13 crc kubenswrapper[4975]: E0122 10:37:13.756384 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.756918 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:13 crc kubenswrapper[4975]: E0122 10:37:13.756988 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.791598 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.791625 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.791632 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.791644 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.791653 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.895075 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.895120 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.895132 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.895149 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.895159 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.966103 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zdgbb" event={"ID":"2f152e5a-dd64-45fc-9a8f-31652989056a","Type":"ContainerStarted","Data":"ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.966164 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zdgbb" event={"ID":"2f152e5a-dd64-45fc-9a8f-31652989056a","Type":"ContainerStarted","Data":"0f8fbe72b0837a2c27129135c28e442109a100856f9f493e61f9430ba161103e"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.969368 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.969405 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.969417 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.971655 4975 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b040-017f-4aa7-ae02-cda6b7a89026" containerID="02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796" exitCode=0 Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.971730 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" event={"ID":"c8f7b040-017f-4aa7-ae02-cda6b7a89026","Type":"ContainerDied","Data":"02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796"} Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.986041 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.998566 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.998829 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.998836 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.998849 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:13 crc kubenswrapper[4975]: I0122 10:37:13.998857 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:13Z","lastTransitionTime":"2026-01-22T10:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.000291 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.013456 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.022475 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.038846 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.055817 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.076497 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.103706 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.110622 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.110658 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.110666 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.110680 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.110689 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:14Z","lastTransitionTime":"2026-01-22T10:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.131567 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.156640 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.170299 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.184910 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.200653 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.213631 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.213678 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.213687 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.213706 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.213717 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:14Z","lastTransitionTime":"2026-01-22T10:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.215423 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.229912 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.249807 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.264636 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.277607 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.289196 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:
37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.303516 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.317035 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.317075 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.317088 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.317105 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.317118 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:14Z","lastTransitionTime":"2026-01-22T10:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.318635 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.333280 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.356790 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.373194 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.391004 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.413178 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.419623 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.419666 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.419676 4975 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.419694 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.419711 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:14Z","lastTransitionTime":"2026-01-22T10:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.433461 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.468128 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.482658 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.493167 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.507908 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.508011 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.508043 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.508079 4975 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:37:22.508056237 +0000 UTC m=+35.166092347 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.508158 4975 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.508165 4975 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.508212 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:22.508198012 +0000 UTC m=+35.166234112 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.508228 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:22.508222002 +0000 UTC m=+35.166258112 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.522297 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.522353 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.522364 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.522377 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.522385 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:14Z","lastTransitionTime":"2026-01-22T10:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.608833 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.608905 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.609030 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.609080 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.609088 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.609100 4975 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.609123 4975 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.609149 4975 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.609185 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:22.609158769 +0000 UTC m=+35.267194919 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.609229 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:22.60920337 +0000 UTC m=+35.267239530 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.624680 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.624730 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.624746 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.624769 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.624787 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:14Z","lastTransitionTime":"2026-01-22T10:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.720460 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:40:04.862532516 +0000 UTC Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.726715 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.726747 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.726755 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.726767 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.726790 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:14Z","lastTransitionTime":"2026-01-22T10:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.754322 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:14 crc kubenswrapper[4975]: E0122 10:37:14.754546 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.830063 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.830102 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.830111 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.830127 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.830137 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:14Z","lastTransitionTime":"2026-01-22T10:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.933198 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.933240 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.933252 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.933268 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.933279 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:14Z","lastTransitionTime":"2026-01-22T10:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.980511 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.980572 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.983137 4975 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b040-017f-4aa7-ae02-cda6b7a89026" containerID="47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf" exitCode=0 Jan 22 10:37:14 crc kubenswrapper[4975]: I0122 10:37:14.983170 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" event={"ID":"c8f7b040-017f-4aa7-ae02-cda6b7a89026","Type":"ContainerDied","Data":"47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.003731 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.018829 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.035929 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z 
is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.036481 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.036625 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.036894 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.037153 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.037951 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.049848 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.060737 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.074430 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.092155 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.106622 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.130235 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.140745 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.140781 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.140791 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.140804 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.140814 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.142950 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.155200 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.166813 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.177552 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.185722 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.196151 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.243433 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.243463 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.243471 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.243485 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.243495 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.345885 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.346121 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.346275 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.346465 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.346606 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.456047 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.456121 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.456146 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.456180 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.457419 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.560352 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.560393 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.560403 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.560427 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.560442 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.662510 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.662544 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.662555 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.662570 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.662581 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.720580 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:33:16.140558857 +0000 UTC Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.754061 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.754099 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:15 crc kubenswrapper[4975]: E0122 10:37:15.754243 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:15 crc kubenswrapper[4975]: E0122 10:37:15.754416 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.764683 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.764739 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.764758 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.764781 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.764800 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.868187 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.868231 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.868240 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.868255 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.868264 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.971797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.971865 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.971882 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.971907 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.971926 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:15Z","lastTransitionTime":"2026-01-22T10:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:15 crc kubenswrapper[4975]: I0122 10:37:15.994556 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.000648 4975 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b040-017f-4aa7-ae02-cda6b7a89026" containerID="d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c" exitCode=0 Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.000715 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" event={"ID":"c8f7b040-017f-4aa7-ae02-cda6b7a89026","Type":"ContainerDied","Data":"d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.018096 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.041824 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.054687 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.066576 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.074701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.074742 4975 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.074751 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.074766 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.074777 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:16Z","lastTransitionTime":"2026-01-22T10:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.077646 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.096548 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.109661 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.124272 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.139031 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.150929 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.161525 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.177648 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.177690 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.177702 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.177718 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.177762 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:16Z","lastTransitionTime":"2026-01-22T10:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.193360 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.214499 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.231775 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.243590 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:16Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.280397 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.280462 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.280471 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.280484 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.280493 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:16Z","lastTransitionTime":"2026-01-22T10:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.383103 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.383160 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.383175 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.383200 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.383216 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:16Z","lastTransitionTime":"2026-01-22T10:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.485865 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.485928 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.485951 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.485982 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.486008 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:16Z","lastTransitionTime":"2026-01-22T10:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.589454 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.589518 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.589535 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.589560 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.589576 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:16Z","lastTransitionTime":"2026-01-22T10:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.692164 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.692216 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.692229 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.692244 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.692258 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:16Z","lastTransitionTime":"2026-01-22T10:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.721769 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:15:12.28831653 +0000 UTC Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.754361 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:16 crc kubenswrapper[4975]: E0122 10:37:16.754568 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.795816 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.795885 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.795901 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.795928 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.795945 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:16Z","lastTransitionTime":"2026-01-22T10:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.899158 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.899231 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.899246 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.899271 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:16 crc kubenswrapper[4975]: I0122 10:37:16.899288 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:16Z","lastTransitionTime":"2026-01-22T10:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.002441 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.002521 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.002545 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.002619 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.002641 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.007896 4975 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b040-017f-4aa7-ae02-cda6b7a89026" containerID="d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc" exitCode=0 Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.007956 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" event={"ID":"c8f7b040-017f-4aa7-ae02-cda6b7a89026","Type":"ContainerDied","Data":"d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.031236 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.051106 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.076282 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.106919 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.106957 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.106965 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.106978 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.106989 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.113003 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5
d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.133883 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.151799 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.166852 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.183242 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.201144 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.209173 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.209207 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.209219 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.209236 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.209248 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.211420 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.224447 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.238381 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.252561 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.269233 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb4158
1c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.284633 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.310833 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.310869 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.310879 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.310919 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.310930 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.413874 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.413936 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.413955 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.413981 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.413999 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.556217 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.556292 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.556309 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.556374 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.556392 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.597586 4975 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 10:37:17 crc kubenswrapper[4975]: E0122 10:37:17.598030 4975 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/events\": read tcp 38.102.83.51:35760->38.102.83.51:6443: use of closed network connection" event="&Event{ObjectMeta:{ovnkube-node-t5gcx.188d0748bba690aa openshift-ovn-kubernetes 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ovn-kubernetes,Name:ovnkube-node-t5gcx,UID:c1de894a-1cf5-47fa-9fc5-0a52783fe152,APIVersion:v1,ResourceVersion:26721,FieldPath:spec.containers{sbdb},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 10:37:17.593768106 +0000 UTC m=+30.251804246,LastTimestamp:2026-01-22 10:37:17.593768106 +0000 UTC m=+30.251804246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.659488 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.659576 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.659601 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.659633 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.659658 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.722432 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:54:36.217235567 +0000 UTC Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.754897 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.755368 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:17 crc kubenswrapper[4975]: E0122 10:37:17.756444 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:17 crc kubenswrapper[4975]: E0122 10:37:17.757097 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.763166 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.763237 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.763255 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.763283 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.763302 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.775580 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.798398 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.815612 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.826183 4975 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.854266 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.876028 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.876075 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.876118 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.876139 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 
10:37:17.876151 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.889916 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.909656 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.923511 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.932151 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.955972 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb4158
1c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.972982 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.979236 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.979279 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.979291 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.979308 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.979321 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:17Z","lastTransitionTime":"2026-01-22T10:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:17 crc kubenswrapper[4975]: I0122 10:37:17.987191 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.002996 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.015501 4975 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b040-017f-4aa7-ae02-cda6b7a89026" containerID="c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176" exitCode=0 Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.015558 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" event={"ID":"c8f7b040-017f-4aa7-ae02-cda6b7a89026","Type":"ContainerDied","Data":"c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176"} Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.024110 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.042806 4975 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cr
i-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.054510 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.065548 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.076638 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.082433 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.082460 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.082468 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.082480 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.082489 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:18Z","lastTransitionTime":"2026-01-22T10:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.084851 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.105955 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.121003 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.134212 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.151642 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.168314 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.185037 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.185085 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.185101 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.185123 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.185137 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:18Z","lastTransitionTime":"2026-01-22T10:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.192595 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5
d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.206674 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.220254 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.235419 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.249226 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.264390 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.275056 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:
37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.287811 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.287851 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.287863 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.287882 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.287894 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:18Z","lastTransitionTime":"2026-01-22T10:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.390838 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.391035 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.391119 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.391191 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.391246 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:18Z","lastTransitionTime":"2026-01-22T10:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.494144 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.494576 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.494761 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.494947 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.495080 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:18Z","lastTransitionTime":"2026-01-22T10:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.598594 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.600198 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.600320 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.600458 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.600555 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:18Z","lastTransitionTime":"2026-01-22T10:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.703956 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.704262 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.704422 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.704548 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.704591 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:18Z","lastTransitionTime":"2026-01-22T10:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.723265 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:45:51.08904203 +0000 UTC Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.753813 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:18 crc kubenswrapper[4975]: E0122 10:37:18.753986 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.807535 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.807846 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.807930 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.808023 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.808115 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:18Z","lastTransitionTime":"2026-01-22T10:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.912018 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.912080 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.912096 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.912121 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:18 crc kubenswrapper[4975]: I0122 10:37:18.912139 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:18Z","lastTransitionTime":"2026-01-22T10:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.015034 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.015100 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.015118 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.015392 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.015442 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.023722 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.029829 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" event={"ID":"c8f7b040-017f-4aa7-ae02-cda6b7a89026","Type":"ContainerStarted","Data":"2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.059169 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.072775 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.086084 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.105684 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.119017 4975 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.119081 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.119100 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.119126 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.119144 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.122043 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.138924 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.150453 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.150592 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.150710 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.150859 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.150973 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.158428 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: E0122 10:37:19.167842 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.172540 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.172680 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.172864 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.172949 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.173050 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.174368 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc 
kubenswrapper[4975]: E0122 10:37:19.191572 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.196573 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.196608 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 
crc kubenswrapper[4975]: I0122 10:37:19.196622 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.196640 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.196651 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.199859 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z 
is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: E0122 10:37:19.214495 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.219154 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.219276 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.219445 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.219535 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.219622 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.219940 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea6
1d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: E0122 10:37:19.230958 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.235105 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.235162 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.235178 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.235201 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.235216 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.237409 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.250135 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: E0122 10:37:19.251268 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: E0122 10:37:19.251631 4975 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.253243 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.253281 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.253292 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.253309 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.253320 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.263573 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.277219 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.289613 4975 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.356132 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.356190 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc 
kubenswrapper[4975]: I0122 10:37:19.356206 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.356232 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.356247 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.460925 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.461012 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.461039 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.461070 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.461103 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.564588 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.564883 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.565025 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.565163 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.565318 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.668400 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.668468 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.668486 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.668512 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.668530 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.723639 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:54:34.794780386 +0000 UTC Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.754222 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.754262 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:19 crc kubenswrapper[4975]: E0122 10:37:19.754496 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:19 crc kubenswrapper[4975]: E0122 10:37:19.754652 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.776697 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.776747 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.776764 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.776785 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.776803 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.879885 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.879926 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.879943 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.879967 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.879983 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.982922 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.982979 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.982996 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.983025 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:19 crc kubenswrapper[4975]: I0122 10:37:19.983042 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:19Z","lastTransitionTime":"2026-01-22T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.085709 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.085760 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.085771 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.085784 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.085794 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:20Z","lastTransitionTime":"2026-01-22T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.206834 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.206924 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.206942 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.206966 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.207005 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:20Z","lastTransitionTime":"2026-01-22T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.310158 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.310213 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.310225 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.310243 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.310260 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:20Z","lastTransitionTime":"2026-01-22T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.413561 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.413635 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.413659 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.413690 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.413715 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:20Z","lastTransitionTime":"2026-01-22T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.517456 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.517496 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.517512 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.517535 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.517553 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:20Z","lastTransitionTime":"2026-01-22T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.619165 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.619215 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.619226 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.619240 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.619250 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:20Z","lastTransitionTime":"2026-01-22T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.722493 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.722794 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.722803 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.722817 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.722825 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:20Z","lastTransitionTime":"2026-01-22T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.724795 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:31:21.982747861 +0000 UTC Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.754498 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:20 crc kubenswrapper[4975]: E0122 10:37:20.754774 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.825663 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.825705 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.825716 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.825734 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.825745 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:20Z","lastTransitionTime":"2026-01-22T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.928532 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.928565 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.928572 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.928585 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:20 crc kubenswrapper[4975]: I0122 10:37:20.928594 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:20Z","lastTransitionTime":"2026-01-22T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.031758 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.031900 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.031920 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.031983 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.032004 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.135218 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.135297 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.135371 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.135406 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.135432 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.237268 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.237379 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.237411 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.237436 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.237478 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.340773 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.340820 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.340831 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.340847 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.340859 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.442666 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.442738 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.442755 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.442780 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.442797 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.546186 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.546248 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.546265 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.546289 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.546306 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.649583 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.649654 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.649676 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.649707 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.649729 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.725264 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:39:37.824397994 +0000 UTC Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.753232 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.753292 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.753311 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.753364 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.753402 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.754152 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:21 crc kubenswrapper[4975]: E0122 10:37:21.754416 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.754514 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:21 crc kubenswrapper[4975]: E0122 10:37:21.754704 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.855810 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.855887 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.855900 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.855914 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.855925 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.959121 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.959168 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.959182 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.959209 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:21 crc kubenswrapper[4975]: I0122 10:37:21.959225 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:21Z","lastTransitionTime":"2026-01-22T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.048477 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.049108 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.062490 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.062554 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.062571 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.062598 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.062619 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.089576 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.113219 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.132688 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.151776 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.165450 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.165505 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.165516 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.165538 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.165550 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.170787 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.188099 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.206217 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.222741 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.238855 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.272589 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.272680 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.272707 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.272740 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.272774 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.285845 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2
f92291e9bedcad9cb393433d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.306969 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.326572 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.345391 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.367073 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/c
ni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.375917 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.376001 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.376021 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.376047 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.376065 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.383142 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.478997 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.479251 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.479427 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.479559 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.479715 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.531732 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.531937 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:37:38.531901902 +0000 UTC m=+51.189938052 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.532163 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.532266 4975 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.532377 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:38.532327214 +0000 UTC m=+51.190363354 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.532544 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.532697 4975 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.532795 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:38.532776696 +0000 UTC m=+51.190812846 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.583664 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.583720 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.583743 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.583772 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.583794 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.609999 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.634301 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.634406 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.634560 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.634612 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.634616 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.634639 4975 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.634650 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.634670 4975 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.634727 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:38.634699928 +0000 UTC m=+51.292736068 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.634757 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:38.63474414 +0000 UTC m=+51.292780290 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.641988 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2
f92291e9bedcad9cb393433d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.674960 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.687246 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.687389 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc 
kubenswrapper[4975]: I0122 10:37:22.687417 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.687449 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.687474 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.701713 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.723305 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.726452 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:21:31.08119485 +0000 UTC Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.746985 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.754720 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:22 crc kubenswrapper[4975]: E0122 10:37:22.755032 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.783051 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.789782 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.789963 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.790030 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.790099 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.790164 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.804450 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.819895 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.837391 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.854323 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.872012 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.884808 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.892770 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.892833 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.892852 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.892879 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.892911 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.898724 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.913040 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.929088 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.995748 4975 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.995815 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.995846 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.995880 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:22 crc kubenswrapper[4975]: I0122 10:37:22.995901 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:22Z","lastTransitionTime":"2026-01-22T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.052746 4975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.053381 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.090153 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.103734 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.103772 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.103781 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.104114 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.104139 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:23Z","lastTransitionTime":"2026-01-22T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.108639 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.126784 4975 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef1141372
8720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.139436 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.152862 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.181953 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ov
nkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.203952 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.209647 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.209696 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.209713 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.209736 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.209756 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:23Z","lastTransitionTime":"2026-01-22T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.224097 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.245431 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.263379 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.297404 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.313457 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.313627 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.313687 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.313706 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.313719 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:23Z","lastTransitionTime":"2026-01-22T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.316028 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.329267 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.348113 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.361214 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.373242 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:
37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.416485 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.416541 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.416555 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.416578 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.416593 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:23Z","lastTransitionTime":"2026-01-22T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.519413 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.519479 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.519497 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.519523 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.519543 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:23Z","lastTransitionTime":"2026-01-22T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.622213 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.622262 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.622275 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.622293 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.622303 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:23Z","lastTransitionTime":"2026-01-22T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.725181 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.725244 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.725256 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.725274 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.725289 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:23Z","lastTransitionTime":"2026-01-22T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.727392 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:28:22.946065638 +0000 UTC Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.754492 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.754506 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:23 crc kubenswrapper[4975]: E0122 10:37:23.754671 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:23 crc kubenswrapper[4975]: E0122 10:37:23.754740 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.828238 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.828295 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.828311 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.828370 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.828395 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:23Z","lastTransitionTime":"2026-01-22T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.905087 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k"] Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.906212 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.909996 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.911245 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.930682 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.930816 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.930845 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.930884 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.930917 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:23Z","lastTransitionTime":"2026-01-22T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.938089 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.947135 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d891ff8-56eb-4f71-bdb4-638cf165511f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.947220 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d891ff8-56eb-4f71-bdb4-638cf165511f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.947392 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzl8d\" (UniqueName: \"kubernetes.io/projected/7d891ff8-56eb-4f71-bdb4-638cf165511f-kube-api-access-hzl8d\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.947795 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d891ff8-56eb-4f71-bdb4-638cf165511f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.957810 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\
\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.973220 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"na
me\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:23 crc kubenswrapper[4975]: I0122 10:37:23.989864 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.006908 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.033429 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.033498 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.033517 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.033547 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.033565 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.043197 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.049542 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d891ff8-56eb-4f71-bdb4-638cf165511f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.049633 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzl8d\" (UniqueName: \"kubernetes.io/projected/7d891ff8-56eb-4f71-bdb4-638cf165511f-kube-api-access-hzl8d\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.049673 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d891ff8-56eb-4f71-bdb4-638cf165511f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.049706 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d891ff8-56eb-4f71-bdb4-638cf165511f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.051387 4975 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d891ff8-56eb-4f71-bdb4-638cf165511f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.051420 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d891ff8-56eb-4f71-bdb4-638cf165511f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.055710 4975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.057594 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d891ff8-56eb-4f71-bdb4-638cf165511f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.067771 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.071839 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.100405 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.114814 4975 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hzl8d\" (UniqueName: \"kubernetes.io/projected/7d891ff8-56eb-4f71-bdb4-638cf165511f-kube-api-access-hzl8d\") pod \"ovnkube-control-plane-749d76644c-zdg4k\" (UID: \"7d891ff8-56eb-4f71-bdb4-638cf165511f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.124255 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.136372 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.136426 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.136443 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.136465 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.136482 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.145047 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.154195 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.166733 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.176853 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.189205 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.199129 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.222126 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ov
nkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.228150 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.239198 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.239250 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.239266 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.239290 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.239312 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:24 crc kubenswrapper[4975]: W0122 10:37:24.250083 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d891ff8_56eb_4f71_bdb4_638cf165511f.slice/crio-e4a780673db686b40c46b300b04df0553b32bbad76dd859849c7e191f8ad98b8 WatchSource:0}: Error finding container e4a780673db686b40c46b300b04df0553b32bbad76dd859849c7e191f8ad98b8: Status 404 returned error can't find the container with id e4a780673db686b40c46b300b04df0553b32bbad76dd859849c7e191f8ad98b8 Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.341901 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.341951 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.341963 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.341978 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.341989 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.444298 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.444520 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.444555 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.444583 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.444605 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.547055 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.547118 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.547135 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.547161 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.547179 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.650055 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.650113 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.650129 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.650150 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.650167 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.728599 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 20:00:34.1218129 +0000 UTC Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.753789 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:24 crc kubenswrapper[4975]: E0122 10:37:24.753988 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.754156 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.754215 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.754235 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.754262 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.754281 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.858542 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.858606 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.858621 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.858645 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.858661 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.961874 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.961924 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.961937 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.961958 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:24 crc kubenswrapper[4975]: I0122 10:37:24.961974 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:24Z","lastTransitionTime":"2026-01-22T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.059223 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" event={"ID":"7d891ff8-56eb-4f71-bdb4-638cf165511f","Type":"ContainerStarted","Data":"da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.059264 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" event={"ID":"7d891ff8-56eb-4f71-bdb4-638cf165511f","Type":"ContainerStarted","Data":"e4a780673db686b40c46b300b04df0553b32bbad76dd859849c7e191f8ad98b8"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.060625 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/0.log" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.062367 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d" exitCode=1 Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.062384 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.062942 4975 scope.go:117] "RemoveContainer" containerID="4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.065237 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.065259 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.065267 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.065278 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.065287 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.082926 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.104020 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.115527 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.142298 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:24Z\\\",\\\"message\\\":\\\"val\\\\nI0122 10:37:23.468122 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:23.468158 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:37:23.468169 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:23.468191 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:23.468209 6311 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:23.468242 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:23.468254 6311 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 10:37:23.468272 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:23.468275 6311 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 10:37:23.468301 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:37:23.468361 6311 factory.go:656] Stopping watch factory\\\\nI0122 10:37:23.468381 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:23.468351 6311 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:37:23.468410 6311 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 
10:37:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.164852 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.168032 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.168126 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.168148 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.168180 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.168202 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.183777 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.199989 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.218580 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.244908 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.271058 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.271092 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.271102 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.271119 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.271129 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.283609 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af
7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.298313 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.312241 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.328995 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.344645 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.362546 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.373229 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.373270 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.373281 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.373300 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.373312 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.375660 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.434610 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-v5jbd"] Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.435108 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:25 crc kubenswrapper[4975]: E0122 10:37:25.435185 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.455955 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.463636 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.463701 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnp8t\" (UniqueName: \"kubernetes.io/projected/50be08c7-0f3c-47b2-b15c-94dea396c9e1-kube-api-access-cnp8t\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:25 
crc kubenswrapper[4975]: I0122 10:37:25.476489 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.476561 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.476578 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.476602 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.476618 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.478858 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.495769 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\
\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.514359 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.527869 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.552738 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.564755 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.564811 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnp8t\" (UniqueName: \"kubernetes.io/projected/50be08c7-0f3c-47b2-b15c-94dea396c9e1-kube-api-access-cnp8t\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:25 crc kubenswrapper[4975]: E0122 10:37:25.564975 4975 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:25 crc kubenswrapper[4975]: E0122 10:37:25.565113 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs podName:50be08c7-0f3c-47b2-b15c-94dea396c9e1 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:26.065084581 +0000 UTC m=+38.723120691 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs") pod "network-metrics-daemon-v5jbd" (UID: "50be08c7-0f3c-47b2-b15c-94dea396c9e1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.571133 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.578270 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.578307 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.578316 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.578344 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.578354 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.583915 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnp8t\" (UniqueName: \"kubernetes.io/projected/50be08c7-0f3c-47b2-b15c-94dea396c9e1-kube-api-access-cnp8t\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.585571 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.596286 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.610148 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.622761 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.633987 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:
37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.651439 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.668603 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.680287 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.680605 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.680650 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.680661 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.680679 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.680692 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.704975 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2
f92291e9bedcad9cb393433d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:24Z\\\",\\\"message\\\":\\\"val\\\\nI0122 10:37:23.468122 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:23.468158 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:37:23.468169 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:23.468191 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:23.468209 6311 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:23.468242 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:23.468254 6311 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 10:37:23.468272 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:23.468275 6311 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 10:37:23.468301 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:37:23.468361 6311 factory.go:656] Stopping watch factory\\\\nI0122 10:37:23.468381 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:23.468351 6311 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:37:23.468410 6311 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 
10:37:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.721123 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.729128 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:50:31.00804589 +0000 UTC Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.754668 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.754692 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:25 crc kubenswrapper[4975]: E0122 10:37:25.754871 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:25 crc kubenswrapper[4975]: E0122 10:37:25.754972 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.783956 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.784002 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.784010 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.784030 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.784041 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.886435 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.886479 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.886490 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.886510 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.886521 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.988635 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.988682 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.988692 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.988710 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:25 crc kubenswrapper[4975]: I0122 10:37:25.988720 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:25Z","lastTransitionTime":"2026-01-22T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.066949 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" event={"ID":"7d891ff8-56eb-4f71-bdb4-638cf165511f","Type":"ContainerStarted","Data":"813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.068686 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/0.log" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.069511 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:26 crc kubenswrapper[4975]: E0122 10:37:26.069660 4975 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:26 crc kubenswrapper[4975]: E0122 10:37:26.069744 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs podName:50be08c7-0f3c-47b2-b15c-94dea396c9e1 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:27.069727473 +0000 UTC m=+39.727763583 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs") pod "network-metrics-daemon-v5jbd" (UID: "50be08c7-0f3c-47b2-b15c-94dea396c9e1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.071437 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.071992 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.087772 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.091160 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.091195 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.091203 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.091218 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.091227 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:26Z","lastTransitionTime":"2026-01-22T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.101616 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.112656 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.144896 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:24Z\\\",\\\"message\\\":\\\"val\\\\nI0122 10:37:23.468122 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:23.468158 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:37:23.468169 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:23.468191 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:23.468209 6311 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:23.468242 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:23.468254 6311 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 10:37:23.468272 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:23.468275 6311 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 10:37:23.468301 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:37:23.468361 6311 factory.go:656] Stopping watch factory\\\\nI0122 10:37:23.468381 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:23.468351 6311 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:37:23.468410 6311 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 
10:37:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.165729 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.178925 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.189844 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.194166 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.194196 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.194207 4975 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.194224 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.194236 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:26Z","lastTransitionTime":"2026-01-22T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.208513 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.221519 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.231646 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.254152 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.269278 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.281455 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.292232 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.296485 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.296537 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.296549 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.296564 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.296575 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:26Z","lastTransitionTime":"2026-01-22T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.302562 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.311129 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.319443 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.335779 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.348366 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.358962 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.371234 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.381061 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.397854 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.398861 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.398951 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.398974 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.399004 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.399027 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:26Z","lastTransitionTime":"2026-01-22T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.408950 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.418927 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.438342 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:24Z\\\",\\\"message\\\":\\\"val\\\\nI0122 10:37:23.468122 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:23.468158 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:37:23.468169 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:23.468191 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:23.468209 6311 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:23.468242 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:23.468254 6311 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 10:37:23.468272 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:23.468275 6311 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 10:37:23.468301 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:37:23.468361 6311 factory.go:656] Stopping watch factory\\\\nI0122 10:37:23.468381 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:23.468351 6311 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:37:23.468410 6311 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 
10:37:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.461696 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.478812 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.492246 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.502064 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.502108 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.502121 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.502139 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.502153 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:26Z","lastTransitionTime":"2026-01-22T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.506534 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.518237 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.528784 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.546436 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.565874 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.604084 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.604143 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.604156 4975 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.604171 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.604181 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:26Z","lastTransitionTime":"2026-01-22T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.706689 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.706824 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.706851 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.706942 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.706967 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:26Z","lastTransitionTime":"2026-01-22T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.729896 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:58:20.061546294 +0000 UTC Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.754381 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:26 crc kubenswrapper[4975]: E0122 10:37:26.754539 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.810848 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.810896 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.810915 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.810937 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.810954 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:26Z","lastTransitionTime":"2026-01-22T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.914602 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.914663 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.914680 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.914705 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:26 crc kubenswrapper[4975]: I0122 10:37:26.914742 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:26Z","lastTransitionTime":"2026-01-22T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.019424 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.019936 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.019959 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.019995 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.020020 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.081527 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:27 crc kubenswrapper[4975]: E0122 10:37:27.081737 4975 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:27 crc kubenswrapper[4975]: E0122 10:37:27.081830 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs podName:50be08c7-0f3c-47b2-b15c-94dea396c9e1 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:29.08180688 +0000 UTC m=+41.739843100 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs") pod "network-metrics-daemon-v5jbd" (UID: "50be08c7-0f3c-47b2-b15c-94dea396c9e1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.086463 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/1.log" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.088669 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/0.log" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.095164 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc" exitCode=1 Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.095244 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.095327 4975 scope.go:117] "RemoveContainer" containerID="4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.096692 4975 scope.go:117] "RemoveContainer" containerID="5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc" Jan 22 10:37:27 crc kubenswrapper[4975]: E0122 10:37:27.097043 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.117507 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.123485 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.123537 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.123553 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.123578 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.123596 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.138558 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.155899 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.187495 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:24Z\\\",\\\"message\\\":\\\"val\\\\nI0122 10:37:23.468122 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:23.468158 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:37:23.468169 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:23.468191 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:23.468209 6311 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:23.468242 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:23.468254 6311 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 10:37:23.468272 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:23.468275 6311 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 10:37:23.468301 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:37:23.468361 6311 factory.go:656] Stopping watch factory\\\\nI0122 10:37:23.468381 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:23.468351 6311 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:37:23.468410 6311 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 10:37:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049466 6482 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049497 6482 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049587 6482 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049736 6482 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049956 6482 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050089 6482 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050406 6482 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.210726 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.227015 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.227070 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc 
kubenswrapper[4975]: I0122 10:37:27.227090 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.227116 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.227137 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.232638 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.251899 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.272702 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.291492 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.307649 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.329968 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.330046 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.330068 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.330095 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.330117 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.343770 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.365983 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.385429 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.406101 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.430887 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.433932 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.433977 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.433994 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.434017 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.434034 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.449761 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.467177 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.537755 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.537801 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.537818 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.537843 4975 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.537861 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.641963 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.642035 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.642053 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.642080 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.642103 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.730318 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 22:48:08.982824397 +0000 UTC Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.745083 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.745153 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.745172 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.745202 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.745223 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.754523 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.754671 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:27 crc kubenswrapper[4975]: E0122 10:37:27.754722 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.754806 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:27 crc kubenswrapper[4975]: E0122 10:37:27.754940 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:27 crc kubenswrapper[4975]: E0122 10:37:27.755191 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.783080 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.817225 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.840053 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.847736 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.847827 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.847847 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.847875 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.847930 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.870100 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.890808 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.911052 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.927760 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.951515 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.951606 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.951626 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.951651 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.951669 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:27Z","lastTransitionTime":"2026-01-22T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.959019 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:27 crc kubenswrapper[4975]: I0122 10:37:27.987579 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.010595 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.027520 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.054913 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.055007 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.055032 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.055066 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.055090 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.060517 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58
f043b2ee52bb929d409357fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fac7e4dc12abfdb02a448bc3b377014bc2e7ec2f92291e9bedcad9cb393433d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:24Z\\\",\\\"message\\\":\\\"val\\\\nI0122 10:37:23.468122 6311 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:23.468158 6311 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:37:23.468152 6311 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:37:23.468169 6311 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:23.468191 6311 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:23.468209 6311 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:23.468242 6311 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:23.468254 6311 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 10:37:23.468272 6311 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:23.468275 6311 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 10:37:23.468301 6311 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:37:23.468361 6311 factory.go:656] Stopping watch factory\\\\nI0122 10:37:23.468381 6311 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:23.468351 6311 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:37:23.468410 6311 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 10:37:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049466 6482 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049497 6482 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049587 6482 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049736 6482 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049956 6482 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050089 6482 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050406 6482 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:/
/4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.080669 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.102054 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/1.log" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.105086 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.110198 4975 scope.go:117] "RemoveContainer" containerID="5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc" Jan 22 10:37:28 crc kubenswrapper[4975]: E0122 10:37:28.110531 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.127456 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.148606 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.158501 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.158551 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.158572 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.158596 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.158615 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.168674 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.203029 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.225686 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.246550 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.261776 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.261828 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.261848 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.261870 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.261891 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.264071 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.286325 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.306325 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.323993 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.347496 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.366593 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.366716 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.366743 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.366775 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.366814 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.374512 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.391486 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.423785 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049466 6482 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049497 6482 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049587 6482 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049736 6482 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049956 6482 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050089 6482 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050406 6482 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.446219 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.470100 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.470667 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.470725 4975 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.470744 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.470769 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.470789 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.490018 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.509922 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.526643 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.550182 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.572840 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.572929 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.572951 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.572981 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.573004 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.675993 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.676069 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.676090 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.676114 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.676130 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.731377 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:05:06.178525708 +0000 UTC Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.754906 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:28 crc kubenswrapper[4975]: E0122 10:37:28.755171 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.779314 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.779402 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.779422 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.779445 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.779463 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.882665 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.882721 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.882738 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.882761 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.882779 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.986192 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.986241 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.986252 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.986270 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:28 crc kubenswrapper[4975]: I0122 10:37:28.986283 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:28Z","lastTransitionTime":"2026-01-22T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.089814 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.089867 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.089881 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.089898 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.089912 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.101768 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.101928 4975 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.102028 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs podName:50be08c7-0f3c-47b2-b15c-94dea396c9e1 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:33.102002558 +0000 UTC m=+45.760038678 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs") pod "network-metrics-daemon-v5jbd" (UID: "50be08c7-0f3c-47b2-b15c-94dea396c9e1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.193599 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.193645 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.193694 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.193714 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.193726 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.297248 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.297321 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.297374 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.297406 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.297428 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.399894 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.399993 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.400003 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.400026 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.400041 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.483833 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.483929 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.483947 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.483975 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.483991 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.505444 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.510562 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.510624 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.510636 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.510661 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.510677 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.529808 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.535467 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.535525 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.535544 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.535568 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.535588 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.566007 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.571044 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.571095 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.571113 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.571137 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.571154 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.590089 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.594731 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.594784 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.594801 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.594826 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.594843 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.619674 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.619914 4975 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.622224 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.622289 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.622311 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.622362 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.622381 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.725628 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.725718 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.725743 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.725773 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.725791 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.731880 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:17:02.707724147 +0000 UTC Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.754283 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.754407 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.754498 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.754507 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.755241 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:29 crc kubenswrapper[4975]: E0122 10:37:29.755424 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.828519 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.828580 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.828597 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.828619 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.828635 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.930912 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.930959 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.930975 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.930996 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:29 crc kubenswrapper[4975]: I0122 10:37:29.931013 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:29Z","lastTransitionTime":"2026-01-22T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.034388 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.034443 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.034458 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.034481 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.034498 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.137517 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.137694 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.137732 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.137765 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.137789 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.240650 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.240732 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.240750 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.240775 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.240793 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.343873 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.343946 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.343963 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.343989 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.344007 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.446868 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.446929 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.446945 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.446970 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.446989 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.550237 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.550300 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.550318 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.550369 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.550386 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.654059 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.654142 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.654165 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.654193 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.654215 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.732852 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:17:52.840914012 +0000 UTC Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.754306 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:30 crc kubenswrapper[4975]: E0122 10:37:30.754579 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.758474 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.758543 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.758566 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.758596 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.758617 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.861970 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.862095 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.862118 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.862195 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.862218 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.966775 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.966904 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.966934 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.966971 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:30 crc kubenswrapper[4975]: I0122 10:37:30.966992 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:30Z","lastTransitionTime":"2026-01-22T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.069923 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.069981 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.070000 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.070025 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.070043 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:31Z","lastTransitionTime":"2026-01-22T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.173918 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.173988 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.174005 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.174032 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.174052 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:31Z","lastTransitionTime":"2026-01-22T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.276808 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.276872 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.276890 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.276917 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.276933 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:31Z","lastTransitionTime":"2026-01-22T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.380240 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.380307 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.380320 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.380364 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.380378 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:31Z","lastTransitionTime":"2026-01-22T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.483297 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.483398 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.483415 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.483441 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.483456 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:31Z","lastTransitionTime":"2026-01-22T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.586567 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.586637 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.586660 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.586695 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.586714 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:31Z","lastTransitionTime":"2026-01-22T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.689683 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.689728 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.689751 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.689778 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.689793 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:31Z","lastTransitionTime":"2026-01-22T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.733549 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:15:26.499770446 +0000 UTC Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.755566 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:31 crc kubenswrapper[4975]: E0122 10:37:31.755871 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.756427 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.756535 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:31 crc kubenswrapper[4975]: E0122 10:37:31.757369 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:31 crc kubenswrapper[4975]: E0122 10:37:31.757584 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.792905 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.792960 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.792977 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.793001 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:31 crc kubenswrapper[4975]: I0122 10:37:31.793019 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:31Z","lastTransitionTime":"2026-01-22T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[node-status block repeats roughly every 100 ms: 10:37:31.896, 10:37:31.999, 10:37:32.102, 10:37:32.204, 10:37:32.307, 10:37:32.411, 10:37:32.514, 10:37:32.617, 10:37:32.721]
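Runs like the one elided above dominate the capture, so it pays to condense them mechanically rather than by eye. A small sketch that buckets kubenswrapper entries by their quoted klog message and prints a count plus first and last timestamp; the regex is written for exactly this line shape and is not a general journald parser.

#!/usr/bin/env python3
import re
import sys
from collections import defaultdict

# Matches the klog core of each kubenswrapper entry in this capture, e.g.:
#   I0122 10:37:31.483297 4975 kubelet_node_status.go:724] "message" ...
ENTRY = re.compile(r'[IEW]\d{4} (\d{2}:\d{2}:\d{2}\.\d+) +\d+ \S+\] "([^"]+)"')

def summarize(stream):
    buckets = defaultdict(list)
    for line in stream:
        for stamp, message in ENTRY.findall(line):
            buckets[message].append(stamp)
    # Most frequent messages first, with the time range they span.
    for message, stamps in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
        print(f"{len(stamps):5d}x  {message}  [{stamps[0]} .. {stamps[-1]}]")

if __name__ == "__main__":
    summarize(sys.stdin)

Fed this journal (for example journalctl -u kubelet | python3 summarize.py, assuming the sketch is saved as summarize.py), the five node-status messages come out on top with one hit per ~100 ms sync, and the genuinely informative entries are what remains.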
Jan 22 10:37:32 crc kubenswrapper[4975]: I0122 10:37:32.733874 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:21:52.43870312 +0000 UTC
Jan 22 10:37:32 crc kubenswrapper[4975]: I0122 10:37:32.754471 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:37:32 crc kubenswrapper[4975]: E0122 10:37:32.754652 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[node-status block repeats at 10:37:32.824 and 10:37:32.927]
[node-status block repeats at 10:37:33.030 and 10:37:33.132]
Jan 22 10:37:33 crc kubenswrapper[4975]: I0122 10:37:33.148937 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd"
Jan 22 10:37:33 crc kubenswrapper[4975]: E0122 10:37:33.149164 4975 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 10:37:33 crc kubenswrapper[4975]: E0122 10:37:33.149252 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs podName:50be08c7-0f3c-47b2-b15c-94dea396c9e1 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:41.149227589 +0000 UTC m=+53.807263739 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs") pod "network-metrics-daemon-v5jbd" (UID: "50be08c7-0f3c-47b2-b15c-94dea396c9e1") : object "openshift-multus"/"metrics-daemon-secret" not registered
[node-status block repeats at 10:37:33.236 and 10:37:33.340]
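The nestedpendingoperations entry above shows the volume manager's per-volume exponential backoff: the MountVolume.SetUp failure at 10:37:33.149 pushes the next attempt to 10:37:41.149, i.e. durationBeforeRetry 8s. An 8 s wait and that deadline are exactly what a 500 ms base doubled on each of five consecutive failures would produce, so the sketch below uses those constants; the base and the cap are assumptions chosen to reproduce the log, not values read out of the kubelet source.

#!/usr/bin/env python3
from datetime import datetime, timedelta

BASE = timedelta(milliseconds=500)  # assumed seed; reproduces the observed 8s
CAP = timedelta(minutes=2)          # assumed ceiling on the doubling

def duration_before_retry(consecutive_failures):
    """Wait imposed after the Nth consecutive failure (doubling, capped)."""
    return min(BASE * 2 ** (consecutive_failures - 1), CAP)

failure_time = datetime.fromisoformat("2026-01-22 10:37:33.149252")  # from the log
for n in range(1, 7):
    wait = duration_before_retry(n)
    print(f"after failure {n}: durationBeforeRetry {wait}, "
          f"no retries until {failure_time + wait}")

The underlying failure, object "openshift-multus"/"metrics-daemon-secret" not registered, is the kubelet's local object cache saying it is not yet tracking that Secret; during startup this typically clears on its own once the pod's sources are registered, well before the backoff reaches its cap.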
[node-status block repeats at 10:37:33.443, 10:37:33.547, and 10:37:33.650]
Jan 22 10:37:33 crc kubenswrapper[4975]: I0122 10:37:33.734764 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:45:26.518622185 +0000 UTC
[the "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping" pairs repeat at 10:37:33.754 for network-metrics-daemon-v5jbd, network-check-source-55646444c4-trplf, and network-check-target-xd92c]
[node-status block repeats at 10:37:33.755]
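Every node-status block embeds the full Ready condition as JSON, so the length of the NotReady window can be read straight out of this capture. Below is a sketch that extracts the condition payloads from the setters.go entries and reports how long Ready has been False; the regex again matches only this line shape.

#!/usr/bin/env python3
import json
import re
import sys
from datetime import datetime

# Pulls the condition={...} payload out of the setters.go entries in this
# capture; the payload is plain JSON, so json.loads takes it directly.
COND = re.compile(r'"Node became not ready" node="([^"]+)" condition=(\{.*?\})')

def parse(stream):
    events = []
    for line in stream:
        for node, payload in COND.findall(line):
            events.append((node, json.loads(payload)))
    return events

if __name__ == "__main__":
    events = parse(sys.stdin)
    if not events:
        sys.exit("no 'Node became not ready' entries found")
    node, first = events[0]
    _, last = events[-1]
    t0 = datetime.fromisoformat(first["lastTransitionTime"].rstrip("Z"))
    t1 = datetime.fromisoformat(last["lastHeartbeatTime"].rstrip("Z"))
    print(f"node {node}: Ready={last['status']} reason={last['reason']} "
          f"for at least {t1 - t0} across {len(events)} updates")

One detail worth noticing in the output: lastTransitionTime advances in lockstep with lastHeartbeatTime in these entries, so the condition is being rewritten wholesale on each sync rather than preserving the original transition stamp.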
[node-status block repeats roughly every 100 ms: 10:37:33.859, 10:37:33.962, 10:37:34.066, 10:37:34.170, 10:37:34.275, 10:37:34.379, 10:37:34.481, 10:37:34.585, 10:37:34.688]
Jan 22 10:37:34 crc kubenswrapper[4975]: I0122 10:37:34.735418 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:29:17.126541086 +0000 UTC
[the "No sandbox" / "Error syncing pod, skipping" pair repeats at 10:37:34.754 for networking-console-plugin-85b44fc459-gdk6g]
[node-status block repeats at 10:37:34.792 and 10:37:34.895]
[node-status block repeats roughly every 100 ms: 10:37:34.999, 10:37:35.102, 10:37:35.205, 10:37:35.309, 10:37:35.413, 10:37:35.516]
[node-status block repeats at 10:37:35.619 and 10:37:35.722]
Jan 22 10:37:35 crc kubenswrapper[4975]: I0122 10:37:35.735900 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:48:20.010566728 +0000 UTC
[the "No sandbox" / "Error syncing pod, skipping" pairs repeat at 10:37:35.754 for network-check-source-55646444c4-trplf, network-metrics-daemon-v5jbd, and network-check-target-xd92c]
[node-status block repeats at 10:37:35.825]
[node-status block repeats roughly every 100 ms: 10:37:35.928, 10:37:36.032, 10:37:36.136, 10:37:36.241, 10:37:36.345, 10:37:36.448]
Has your network provider started?"} Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.552053 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.552112 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.552131 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.552155 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.552173 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:36Z","lastTransitionTime":"2026-01-22T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.655539 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.655599 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.655615 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.655641 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.655657 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:36Z","lastTransitionTime":"2026-01-22T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.736428 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:58:40.868737408 +0000 UTC Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.754815 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:36 crc kubenswrapper[4975]: E0122 10:37:36.754988 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.758838 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.758934 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.758956 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.758978 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.759037 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:36Z","lastTransitionTime":"2026-01-22T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.862400 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.862474 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.862497 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.862526 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.862549 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:36Z","lastTransitionTime":"2026-01-22T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.965441 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.965506 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.965523 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.965547 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:36 crc kubenswrapper[4975]: I0122 10:37:36.965565 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:36Z","lastTransitionTime":"2026-01-22T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.068416 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.068477 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.068536 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.068564 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.068583 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:37Z","lastTransitionTime":"2026-01-22T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.171438 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.171562 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.171575 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.171591 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.171604 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:37Z","lastTransitionTime":"2026-01-22T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.275019 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.275101 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.275121 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.275145 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.275162 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:37Z","lastTransitionTime":"2026-01-22T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.378038 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.378117 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.378137 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.378164 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.378185 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:37Z","lastTransitionTime":"2026-01-22T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.480956 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.481046 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.481078 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.481112 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.481133 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:37Z","lastTransitionTime":"2026-01-22T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.584870 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.584931 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.584949 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.584975 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.584993 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:37Z","lastTransitionTime":"2026-01-22T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.688161 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.688280 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.688305 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.688380 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.688401 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:37Z","lastTransitionTime":"2026-01-22T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.736782 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:32:59.209187388 +0000 UTC Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.753920 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.754044 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:37 crc kubenswrapper[4975]: E0122 10:37:37.754371 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.754407 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:37 crc kubenswrapper[4975]: E0122 10:37:37.754477 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:37 crc kubenswrapper[4975]: E0122 10:37:37.754668 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.783687 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:37Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.791388 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.791450 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.791474 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.791507 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 
10:37:37.791531 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:37Z","lastTransitionTime":"2026-01-22T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.809569 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:37Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.827723 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:37Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.863532 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049466 6482 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049497 6482 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049587 6482 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049736 6482 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049956 6482 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050089 6482 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050406 6482 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:37Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.889536 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:37Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.897043 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.897105 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.897124 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.897149 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.897170 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:37Z","lastTransitionTime":"2026-01-22T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.920858 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:37Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.941430 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:37Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.962833 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:37Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:37 crc kubenswrapper[4975]: I0122 10:37:37.985771 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:37Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.002433 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:38Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.004856 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.005148 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.005289 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.005317 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.005370 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.040264 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:38Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.061948 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:38Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.080885 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:38Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.101922 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:38Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.108142 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.108219 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.108238 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.108265 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.108283 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.122363 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:38Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.139277 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:38Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.155154 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:38Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.210688 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.210814 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.210837 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.210861 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.210880 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.313722 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.313769 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.313784 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.313805 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.313821 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.417022 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.417091 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.417108 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.417136 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.417155 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.519880 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.520111 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.520147 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.520178 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.520202 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.612896 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.613091 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.613153 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.613289 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:38:10.613262209 +0000 UTC m=+83.271298359 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.613318 4975 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.613621 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:38:10.613591288 +0000 UTC m=+83.271627438 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.613463 4975 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.613927 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:38:10.613903456 +0000 UTC m=+83.271939606 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.624914 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.624961 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.624979 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.625003 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.625020 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.714818 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.714977 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.715042 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.715083 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.715108 4975 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.715206 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:38:10.715170212 +0000 UTC m=+83.373206362 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.715208 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.715250 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.715276 4975 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.715423 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:38:10.715394988 +0000 UTC m=+83.373431158 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.727902 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.727959 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.727976 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.727999 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.728015 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.737785 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:47:30.951812435 +0000 UTC Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.754527 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:38 crc kubenswrapper[4975]: E0122 10:37:38.754721 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.831645 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.831743 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.831762 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.831787 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.831804 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.935201 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.935268 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.935287 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.935320 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:38 crc kubenswrapper[4975]: I0122 10:37:38.935390 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:38Z","lastTransitionTime":"2026-01-22T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.039634 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.039753 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.039772 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.039800 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.039817 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.142644 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.142699 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.142718 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.142743 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.142761 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.245775 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.245830 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.245844 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.245865 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.245881 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.348772 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.348859 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.348885 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.348916 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.348948 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.453132 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.453196 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.453214 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.453238 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.453258 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.556582 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.556639 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.556655 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.556679 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.556696 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.627611 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.627662 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.627679 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.627701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.627719 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: E0122 10:37:39.647977 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.653025 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.653089 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.653109 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.653133 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.653150 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: E0122 10:37:39.675554 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.681037 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.681072 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.681085 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.681106 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.681122 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: E0122 10:37:39.701581 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.707169 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.707256 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.707272 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.707294 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.707309 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: E0122 10:37:39.728102 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.734219 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.734292 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.734310 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.734372 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.734401 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.738531 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:00:06.775974631 +0000 UTC Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.753963 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.754070 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:39 crc kubenswrapper[4975]: E0122 10:37:39.754220 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:39 crc kubenswrapper[4975]: E0122 10:37:39.754879 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.755007 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:39 crc kubenswrapper[4975]: E0122 10:37:39.755109 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:39 crc kubenswrapper[4975]: E0122 10:37:39.756483 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the 10:37:39.728102 attempt above, elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:39 crc kubenswrapper[4975]: E0122 10:37:39.756737 4975 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.759640 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.759731 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.759792 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.759857 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.759969 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.864038 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.864537 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.864607 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.864675 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.864737 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.968461 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.968551 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.968580 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.968614 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:39 crc kubenswrapper[4975]: I0122 10:37:39.968639 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:39Z","lastTransitionTime":"2026-01-22T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.072422 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.072487 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.072510 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.072541 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.072566 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:40Z","lastTransitionTime":"2026-01-22T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.087708 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.103192 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.114963 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.133759 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.153542 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.171534 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.176154 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.176196 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.176212 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.176233 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.176247 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:40Z","lastTransitionTime":"2026-01-22T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.210742 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d7
42fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.274742 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.279500 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.279553 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.279565 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.279586 4975 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.279599 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:40Z","lastTransitionTime":"2026-01-22T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.291224 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.306762 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.337877 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049466 6482 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049497 6482 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049587 6482 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049736 6482 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049956 6482 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050089 6482 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050406 6482 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.357361 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.379582 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.382323 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.382387 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.382400 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.382417 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.382430 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:40Z","lastTransitionTime":"2026-01-22T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.401619 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.423102 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.440198 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.457566 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.481578 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.486521 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.486581 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.486598 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.486621 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.486639 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:40Z","lastTransitionTime":"2026-01-22T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.499856 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.589923 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.590294 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.590432 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.590504 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.590568 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:40Z","lastTransitionTime":"2026-01-22T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.694606 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.694686 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.694704 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.694732 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.694751 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:40Z","lastTransitionTime":"2026-01-22T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.739001 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:27:37.26514545 +0000 UTC Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.754690 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:40 crc kubenswrapper[4975]: E0122 10:37:40.754925 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.797962 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.798011 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.798028 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.798053 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.798070 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:40Z","lastTransitionTime":"2026-01-22T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
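
Every "Failed to update status for pod" entry above shares one root cause: the kubelet's PATCH against pods/status is intercepted by the pod.network-node-identity.openshift.io validating webhook on https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-22. The sketch below probes the endpoint and prints the certificate window; it assumes only the host and port quoted in the log, plus the third-party cryptography package to decode the DER certificate (not_valid_after_utc needs cryptography >= 42).

    import ssl, socket, datetime
    from cryptography import x509  # third-party, used only to decode the DER cert

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint taken from the log above

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # skip verification: the point is to inspect a bad cert

    with socket.create_connection((HOST, PORT), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    now = datetime.datetime.now(datetime.timezone.utc)
    print("subject: ", cert.subject.rfc4514_string())
    print("notAfter:", cert.not_valid_after_utc)
    print("expired: ", cert.not_valid_after_utc < now)

On this node the last line should print "expired: True", matching the x509 error repeated through the entries above.
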
Has your network provider started?"} Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.900893 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.901023 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.901044 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.901071 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:40 crc kubenswrapper[4975]: I0122 10:37:40.901092 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:40Z","lastTransitionTime":"2026-01-22T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.004007 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.004065 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.004082 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.004105 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.004120 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.106699 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.106762 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.106773 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.106791 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.106805 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.169377 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:41 crc kubenswrapper[4975]: E0122 10:37:41.169577 4975 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:41 crc kubenswrapper[4975]: E0122 10:37:41.170065 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs podName:50be08c7-0f3c-47b2-b15c-94dea396c9e1 nodeName:}" failed. No retries permitted until 2026-01-22 10:37:57.170036531 +0000 UTC m=+69.828072681 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs") pod "network-metrics-daemon-v5jbd" (UID: "50be08c7-0f3c-47b2-b15c-94dea396c9e1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.209716 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.209974 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.210115 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.210254 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.210417 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
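
The metrics-certs failure above also shows the kubelet's per-volume retry policy: "No retries permitted until ... (durationBeforeRetry 16s)" means the operation re-arms with an exponentially growing delay, and 16s corresponds to roughly the sixth consecutive failure. A sketch of that schedule, assuming the upstream defaults of the kubelet's exponential-backoff helper (500 ms initial delay, doubling, capped at 2m2s); the constants are quoted from memory of the Kubernetes source, so treat them as approximate:

    from datetime import timedelta

    # Assumed defaults (kubernetes pkg/util/goroutinemap/exponentialbackoff):
    INITIAL = timedelta(milliseconds=500)
    FACTOR = 2
    CAP = timedelta(minutes=2, seconds=2)

    def backoff_schedule(failures):
        delay = INITIAL
        for n in range(1, failures + 1):
            yield n, delay
            delay = min(delay * FACTOR, CAP)

    for attempt, delay in backoff_schedule(8):
        print(f"failure {attempt}: next retry in {delay.total_seconds():.1f}s")
    # failure 6 prints 16.0s, matching durationBeforeRetry 16s in the log
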
Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.314576 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.314833 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.314961 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.315132 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.315285 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.418252 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.418313 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.418357 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.418383 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.418400 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.521392 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.521446 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.521454 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.521470 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.521478 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
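
The NotReady loop itself is mechanical: the runtime reports NetworkReady=false until a CNI network configuration shows up in /etc/kubernetes/cni/net.d/, and on this cluster it is ovnkube-node that writes one once the OVN stack is running (its ContainerStarted event appears further down, at 10:37:43). Checking a node by hand only requires looking for the file extensions libcni loads; a minimal sketch, assuming libcni's .conf/.conflist/.json extensions and the confDir quoted in the log:

    import json, pathlib

    CONF_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")  # confDir from the log
    EXTS = {".conf", ".conflist", ".json"}                # extensions libcni loads

    candidates = sorted(p for p in CONF_DIR.glob("*") if p.suffix in EXTS) \
        if CONF_DIR.is_dir() else []

    if not candidates:
        print(f"no CNI configuration file in {CONF_DIR}/; runtime stays NetworkReady=false")
    for p in candidates:
        try:
            conf = json.loads(p.read_text())
            plugins = conf.get("plugins", [conf])  # .conflist list vs single .conf
            print(f"{p.name}: network {conf.get('name', '?')!r}, "
                  f"plugin types {[pl.get('type') for pl in plugins]}")
        except (OSError, json.JSONDecodeError) as exc:
            print(f"{p.name}: unreadable ({exc})")
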
Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.624576 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.624710 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.624729 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.624754 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.624770 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.727782 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.728128 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.728327 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.728553 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.728718 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.739490 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:58:08.715875351 +0000 UTC Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.753792 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.753839 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.753820 4975 util.go:30] "No sandbox for pod can be found. 
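
The certificate_manager lines are a separate thread, and they explain why the logged "rotation deadline" jumps around (2025-11-15, then 2025-12-17, then 2025-12-11 on consecutive passes): client-go re-jitters the deadline every time it computes it, placing it at a random point roughly 70-90% of the way through the serving certificate's validity window, so a deadline in the past simply means rotation is already due. A sketch of that computation; the 70-90% window is recalled from client-go's nextRotationDeadline, and the notBefore below is an assumption since only notAfter appears in the log:

    import random
    from datetime import datetime, timedelta, timezone

    def next_rotation_deadline(not_before, not_after, rng=random):
        """Re-jittered on every call; assumed uniform in [0.7, 0.9) of lifetime."""
        lifetime = not_after - not_before
        return not_before + lifetime * (0.7 + 0.2 * rng.random())

    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log
    not_before = not_after - timedelta(days=365)  # hypothetical one-year cert

    for _ in range(3):  # three passes, three different deadlines, as in the log
        print(next_rotation_deadline(not_before, not_after).isoformat())
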
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:41 crc kubenswrapper[4975]: E0122 10:37:41.753973 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:41 crc kubenswrapper[4975]: E0122 10:37:41.754071 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:41 crc kubenswrapper[4975]: E0122 10:37:41.754183 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.832386 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.832459 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.832478 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.832503 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.832521 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
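
Note which pods the "Error syncing pod, skipping" entries name: network-check-source, network-check-target and network-metrics-daemon all sit on the cluster network, so their sandboxes cannot be created until the CNI plugin is up, while the static host-network pods earlier in the log (etcd-crc, kube-apiserver-crc) started fine because hostNetwork sandboxes do not need CNI. A minimal filter over plain pod JSON manifests that makes the same distinction; the input paths are whatever files you point it at:

    import json, sys

    def needs_cni(pod):
        """Cluster-network pods need a CNI plugin for sandbox creation;
        hostNetwork pods do not."""
        return not pod.get("spec", {}).get("hostNetwork", False)

    for path in sys.argv[1:]:
        with open(path) as fh:
            pod = json.load(fh)
        name = pod.get("metadata", {}).get("name", path)
        verdict = "blocked until CNI is ready" if needs_cni(pod) else "can start now"
        print(f"{name}: {verdict}")
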
Has your network provider started?"} Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.935724 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.935799 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.935821 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.935851 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:41 crc kubenswrapper[4975]: I0122 10:37:41.935871 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:41Z","lastTransitionTime":"2026-01-22T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.038421 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.038506 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.038529 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.038555 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.038572 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.142274 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.142374 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.142393 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.142421 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.142436 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.245697 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.245748 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.245765 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.245789 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.245806 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.348663 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.348724 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.348740 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.348766 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.348785 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.453269 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.453320 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.453366 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.453392 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.453411 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.556289 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.556407 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.556426 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.556450 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.556468 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.659820 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.659889 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.659906 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.659932 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.659951 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.740457 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:48:01.054211169 +0000 UTC Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.754680 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:42 crc kubenswrapper[4975]: E0122 10:37:42.754933 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.755224 4975 scope.go:117] "RemoveContainer" containerID="5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.766662 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.766967 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.766983 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.767008 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.767024 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.869963 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.870019 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.870031 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.870059 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.870073 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.974067 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.974110 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.974121 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.974137 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:42 crc kubenswrapper[4975]: I0122 10:37:42.974147 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:42Z","lastTransitionTime":"2026-01-22T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.077021 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.077071 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.077093 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.077119 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.077140 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:43Z","lastTransitionTime":"2026-01-22T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.166484 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/1.log" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.170562 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.171621 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.179064 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.179115 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.179132 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.179157 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.179174 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:43Z","lastTransitionTime":"2026-01-22T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
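
The 10:37:43 PLEG event just above is the turning point: ContainerStarted for ovnkube-controller in ovnkube-node-t5gcx is the component expected to write the missing CNI config and end the NotReady loop. Spotting events like this among thousands of near-identical heartbeat entries is easier with a deduplicating pass over the journal; a sketch that assumes the journald-plus-klog prefix used throughout this log ("Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.170562 4975 file.go:123] ...") and one journal entry per input line:

    import re, sys
    from collections import Counter

    # journald prefix, then a klog header: severity+MMDD, time, pid, file:line
    LINE = re.compile(
        r"^\w{3} +\d+ [\d:]+ \S+ kubenswrapper\[\d+\]: "
        r"([IWEF])\d{4} [\d:.]+ +\d+ (\S+:\d+)\] (.*)$"
    )

    counts = Counter()
    for line in sys.stdin:
        m = LINE.match(line)
        if not m:
            continue
        sev, loc, msg = m.groups()
        # collapse on the quoted structured message when present
        key = msg.split('"')[1] if '"' in msg else msg[:60]
        counts[(sev, loc, key)] += 1

    for (sev, loc, key), n in counts.most_common(10):
        print(f"{n:6d}x {sev} {loc} {key}")

Run as journalctl -u kubelet | python3 triage.py; on this excerpt the top entries would be the NodeHasSufficient* and NodeNotReady heartbeats from kubelet_node_status.go:724 and setters.go:603.
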
Has your network provider started?"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.189414 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.211731 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.230365 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.251712 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.273770 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.282075 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.282147 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.282170 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.282199 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.282221 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:43Z","lastTransitionTime":"2026-01-22T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.297433 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.314284 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.340496 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.359075 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.382462 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.384653 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.384740 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.384764 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.384797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.384821 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:43Z","lastTransitionTime":"2026-01-22T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.405683 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.421195 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.433972 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.453274 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.467048 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.480941 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.487833 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.487890 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.487908 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.487933 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.487948 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:43Z","lastTransitionTime":"2026-01-22T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.500028 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.519642 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049466 6482 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049497 6482 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049587 6482 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049736 6482 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049956 6482 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050089 6482 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050406 6482 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.593134 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.593184 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.593200 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.593225 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.593240 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:43Z","lastTransitionTime":"2026-01-22T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.695312 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.695398 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.695415 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.695437 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.695454 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:43Z","lastTransitionTime":"2026-01-22T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.741430 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:23:31.129331885 +0000 UTC Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.754864 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.754936 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:43 crc kubenswrapper[4975]: E0122 10:37:43.754998 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:43 crc kubenswrapper[4975]: E0122 10:37:43.755072 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.755236 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:43 crc kubenswrapper[4975]: E0122 10:37:43.755311 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.798730 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.798783 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.798795 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.798814 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.798827 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:43Z","lastTransitionTime":"2026-01-22T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.901769 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.901813 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.901832 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.901851 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:43 crc kubenswrapper[4975]: I0122 10:37:43.901865 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:43Z","lastTransitionTime":"2026-01-22T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.004770 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.004851 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.004871 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.004905 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.004931 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.108541 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.108600 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.108619 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.108643 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.108664 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.177949 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/2.log" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.179285 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/1.log" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.184755 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814" exitCode=1 Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.184829 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.185218 4975 scope.go:117] "RemoveContainer" containerID="5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.186323 4975 scope.go:117] "RemoveContainer" containerID="f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814" Jan 22 10:37:44 crc kubenswrapper[4975]: E0122 10:37:44.186874 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.209751 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.212395 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.212449 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.212462 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.212482 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.212495 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.229241 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.249186 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.266467 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.281313 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.296589 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.315795 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.315893 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.315916 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.316062 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.316154 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.328083 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.346997 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.362407 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.381859 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.395629 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.409730 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.418848 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.419098 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.419197 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.419304 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.419411 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.423573 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.442774 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.462038 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.477839 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.505108 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8ea41db31401359937628d6936edc493213b58f043b2ee52bb929d409357fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049466 6482 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049497 6482 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049587 6482 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:37:26.049736 6482 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:37:26.049956 6482 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050089 6482 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 10:37:26.050406 6482 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:43Z\\\",\\\"message\\\":\\\" obj_retry.go:551] Creating *factory.egressNode crc took: 3.572026ms\\\\nI0122 10:37:43.881635 6660 factory.go:1336] Added *v1.Node event handler 7\\\\nI0122 10:37:43.881675 6660 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 10:37:43.881696 6660 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:43.881701 6660 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:37:43.881731 6660 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:43.881747 6660 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:43.881768 6660 factory.go:656] Stopping watch factory\\\\nI0122 10:37:43.881775 6660 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:43.881797 6660 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:43.881809 6660 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:43.881813 6660 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:43.882123 6660 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:37:43.882253 6660 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:37:43.882304 6660 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:43.882383 6660 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:37:43.882489 6660 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.523175 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.523226 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.523238 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.523259 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.523271 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.525763 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:44Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.626222 4975 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.626297 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.626317 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.626388 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.626414 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.730413 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.730468 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.730482 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.730502 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.730518 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.741893 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:39:43.333535223 +0000 UTC Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.754449 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:44 crc kubenswrapper[4975]: E0122 10:37:44.754646 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.833016 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.833527 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.833673 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.833809 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.833920 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.938591 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.938871 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.939202 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.939350 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:44 crc kubenswrapper[4975]: I0122 10:37:44.939586 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:44Z","lastTransitionTime":"2026-01-22T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.042833 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.043191 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.043287 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.043438 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.043571 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.148973 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.149044 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.149055 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.149078 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.149093 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.193531 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/2.log" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.199633 4975 scope.go:117] "RemoveContainer" containerID="f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814" Jan 22 10:37:45 crc kubenswrapper[4975]: E0122 10:37:45.199871 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.217845 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 
2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.243814 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:43Z\\\",\\\"message\\\":\\\" obj_retry.go:551] Creating *factory.egressNode crc took: 3.572026ms\\\\nI0122 10:37:43.881635 6660 factory.go:1336] Added *v1.Node event handler 7\\\\nI0122 10:37:43.881675 6660 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 10:37:43.881696 6660 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:43.881701 6660 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:37:43.881731 6660 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:43.881747 6660 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:43.881768 6660 factory.go:656] Stopping watch factory\\\\nI0122 10:37:43.881775 6660 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:43.881797 6660 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:43.881809 6660 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:43.881813 6660 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:43.882123 6660 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:37:43.882253 6660 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:37:43.882304 6660 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:43.882383 6660 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:37:43.882489 6660 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.252236 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.252294 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.252316 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.252389 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.252409 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.263420 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.280652 4975 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef1141372
8720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.297648 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.314926 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.335259 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.355451 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.356600 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.356671 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.356691 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.356720 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.356745 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.381202 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.403520 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.437856 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.459532 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.460321 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.460585 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.460616 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.460651 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.460677 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.479159 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.501361 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.521258 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.538879 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.558800 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.563722 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.563774 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.563790 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.563813 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.563826 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.579890 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:45Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.666961 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.667026 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.667044 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.667068 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.667085 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.742838 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:25:49.793217269 +0000 UTC Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.754390 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.754470 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:45 crc kubenswrapper[4975]: E0122 10:37:45.754616 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.754646 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:45 crc kubenswrapper[4975]: E0122 10:37:45.754780 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:45 crc kubenswrapper[4975]: E0122 10:37:45.755050 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.769813 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.769878 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.769897 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.769922 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.769942 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.873321 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.873387 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.873399 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.873418 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.873431 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.977232 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.977313 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.977362 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.977389 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:45 crc kubenswrapper[4975]: I0122 10:37:45.977406 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:45Z","lastTransitionTime":"2026-01-22T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.079816 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.079887 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.079910 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.079941 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.079966 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:46Z","lastTransitionTime":"2026-01-22T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.183249 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.183360 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.183379 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.183408 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.183427 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:46Z","lastTransitionTime":"2026-01-22T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.286585 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.286669 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.286697 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.286728 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.286749 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:46Z","lastTransitionTime":"2026-01-22T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.389486 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.389546 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.389562 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.389587 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.389606 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:46Z","lastTransitionTime":"2026-01-22T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.493293 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.493380 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.493398 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.493421 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.493438 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:46Z","lastTransitionTime":"2026-01-22T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.596516 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.596588 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.596607 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.596634 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.596652 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:46Z","lastTransitionTime":"2026-01-22T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.699910 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.700036 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.700060 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.700112 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.700136 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:46Z","lastTransitionTime":"2026-01-22T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.743498 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:12:25.737915219 +0000 UTC Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.754827 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:46 crc kubenswrapper[4975]: E0122 10:37:46.754989 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.802626 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.802691 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.802707 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.802733 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.802752 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:46Z","lastTransitionTime":"2026-01-22T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.905500 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.905585 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.905613 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.905641 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:46 crc kubenswrapper[4975]: I0122 10:37:46.905658 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:46Z","lastTransitionTime":"2026-01-22T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.008920 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.008987 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.009004 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.009029 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.009047 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.112637 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.112701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.112718 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.112743 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.112764 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.215880 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.215976 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.215998 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.216023 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.216039 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.319270 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.319392 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.319421 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.319454 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.319477 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.421938 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.422009 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.422031 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.422061 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.422085 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.525633 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.525705 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.525726 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.525776 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.525800 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.629194 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.629259 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.629283 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.629311 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.629370 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.733186 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.733266 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.733284 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.733309 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.733327 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.744460 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:39:01.810270663 +0000 UTC Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.754431 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.754449 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.754626 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:47 crc kubenswrapper[4975]: E0122 10:37:47.754836 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:47 crc kubenswrapper[4975]: E0122 10:37:47.755022 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:47 crc kubenswrapper[4975]: E0122 10:37:47.755170 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.791120 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f36714
75a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.812629 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.830139 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.836464 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.836494 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.836508 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.836526 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.836540 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.850913 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.869956 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.887481 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.909180 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.929312 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.939701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.939755 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.939773 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.939796 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.939814 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:47Z","lastTransitionTime":"2026-01-22T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.946488 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:47 crc kubenswrapper[4975]: I0122 10:37:47.978699 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:43Z\\\",\\\"message\\\":\\\" obj_retry.go:551] Creating *factory.egressNode crc took: 3.572026ms\\\\nI0122 10:37:43.881635 6660 factory.go:1336] Added *v1.Node event handler 7\\\\nI0122 10:37:43.881675 6660 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 10:37:43.881696 6660 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:43.881701 6660 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:37:43.881731 6660 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:43.881747 6660 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:43.881768 6660 factory.go:656] Stopping watch factory\\\\nI0122 10:37:43.881775 6660 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:43.881797 6660 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:43.881809 6660 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:43.881813 6660 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:43.882123 6660 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:37:43.882253 6660 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:37:43.882304 6660 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:43.882383 6660 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:37:43.882489 6660 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.001568 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.023400 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.043764 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.043854 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.043872 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.043921 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.043938 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.044549 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.066505 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.085327 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.102987 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.127521 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.147019 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.147320 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.147396 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.147415 4975 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.147441 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.147460 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.250223 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.250300 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.250321 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.250391 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.250416 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.353428 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.353480 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.353497 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.353521 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.353539 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.456895 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.456959 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.456977 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.457003 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.457021 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.559833 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.559893 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.559912 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.559936 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.559956 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.662745 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.662814 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.662836 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.662867 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.662893 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.745176 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:47:01.300246883 +0000 UTC Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.754597 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:48 crc kubenswrapper[4975]: E0122 10:37:48.754848 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.766379 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.766424 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.766436 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.766454 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.766466 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.870387 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.870446 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.870458 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.870480 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.870511 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.973865 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.973929 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.973950 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.973996 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:48 crc kubenswrapper[4975]: I0122 10:37:48.974018 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:48Z","lastTransitionTime":"2026-01-22T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.076411 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.076470 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.076488 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.076512 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.076529 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.179227 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.179300 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.179326 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.179394 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.179420 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.281902 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.281963 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.281980 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.282004 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.282022 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.384595 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.384665 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.384682 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.384706 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.384727 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.488141 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.488220 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.488238 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.488263 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.488282 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.591160 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.591240 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.591258 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.591282 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.591300 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.693730 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.693796 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.693812 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.693836 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.693853 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.745933 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:51:09.892677206 +0000 UTC Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.754418 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.754550 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:49 crc kubenswrapper[4975]: E0122 10:37:49.754612 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.754550 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:49 crc kubenswrapper[4975]: E0122 10:37:49.754768 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:49 crc kubenswrapper[4975]: E0122 10:37:49.754813 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.797078 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.797138 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.797155 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.797213 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.797280 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.900022 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.900111 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.900152 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.900187 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.900212 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.934796 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.934850 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.934866 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.934890 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.934906 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: E0122 10:37:49.951051 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:49Z is after 
2025-08-24T17:21:41Z" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.956107 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.956170 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.956184 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.956205 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.956221 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: E0122 10:37:49.974768 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:49Z is after 
2025-08-24T17:21:41Z" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.980126 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.980185 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.980205 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.980228 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:49 crc kubenswrapper[4975]: I0122 10:37:49.980245 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:49Z","lastTransitionTime":"2026-01-22T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:49 crc kubenswrapper[4975]: E0122 10:37:49.997448 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:49Z is after 
2025-08-24T17:21:41Z" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.003017 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.003107 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.003542 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.003629 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.003895 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: E0122 10:37:50.024978 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.032756 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.032784 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.032792 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.032806 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.032816 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
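The payload in these failed patches (the image list is elided above where it repeats verbatim) is a Kubernetes strategic-merge patch: the $setElementOrder/conditions directive pins the ordering of the conditions list, while each condition entry is merged into the existing status by its "type" key. A minimal sketch of building the same shape, abbreviated to a single condition with field values taken from the log:

// patchshape.go - minimal sketch of the strategic-merge-patch shape seen in
// the "failed to patch status" payloads above. Only one condition is shown;
// the real patch carries four conditions plus allocatable/capacity/images.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Directive: preserve the order of the merged "conditions" list.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
				{"type": "PIDPressure"}, {"type": "Ready"},
			},
			// Each entry is merged by its "type" key into the node status.
			"conditions": []map[string]string{{
				"type":   "Ready",
				"status": "False",
				"reason": "KubeletNotReady",
			}},
		},
	}
	out, _ := json.Marshal(patch)
	fmt.Println(string(out))
}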
Jan 22 10:37:50 crc kubenswrapper[4975]: E0122 10:37:50.050396 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:50Z is after
2025-08-24T17:21:41Z" Jan 22 10:37:50 crc kubenswrapper[4975]: E0122 10:37:50.050498 4975 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.051391 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.051408 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.051415 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.051424 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.051431 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.154652 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.154722 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.154744 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.154772 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.154795 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
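All three retries above bottom out in the same TLS handshake problem: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is months before the node's clock (2026-01-22), so the kubelet exhausts its retry count without ever patching the node status. A minimal sketch of the same validity check, pointed at the endpoint from the log; it assumes it is run on the node itself with a Go toolchain available:

// certprobe.go - minimal sketch: fetch the serving certificate of the
// webhook endpoint named in the log and compare its validity window
// against the current time, mirroring the failing x509 check.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip chain verification: we only want to inspect the leaf cert.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", leaf.Subject)
	fmt.Println("notBefore:", leaf.NotBefore)
	fmt.Println("notAfter: ", leaf.NotAfter)
	if time.Now().After(leaf.NotAfter) {
		fmt.Println("certificate has expired (matches the x509 error above)")
	}
}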
Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.257362 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.257420 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.257438 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.257459 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.257479 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.360469 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.360526 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.360543 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.360570 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.360588 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.463758 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.463830 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.463841 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.463882 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.463894 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.567830 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.567911 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.567928 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.567961 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.567979 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.671293 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.671383 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.671400 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.671424 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.671444 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.746383 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:11:39.900168806 +0000 UTC Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.754894 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:50 crc kubenswrapper[4975]: E0122 10:37:50.755103 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
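Every "Node became not ready" heartbeat and every "Error syncing pod, skipping" entry carries the same root message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A hedged sketch of the check that message implies is below; the real CRI-O/CNI loader has more elaborate validation, and the accepted extensions listed here are the common libcni ones, so treat this as illustrative rather than the kubelet's actual implementation:

// cnicheck.go - illustrative sketch: report whether the directory named in
// the NetworkPluginNotReady message contains any CNI network configuration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions the CNI loader commonly accepts
			fmt.Println("config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// This is the state the kubelet keeps reporting: NetworkReady=false
		// until the network provider writes a configuration file here.
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}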
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.774571 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.774648 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.774668 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.774701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.774722 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.877829 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.877881 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.877893 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.877910 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.877928 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.981111 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.981177 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.981195 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.981227 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:50 crc kubenswrapper[4975]: I0122 10:37:50.981244 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:50Z","lastTransitionTime":"2026-01-22T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.085912 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.085959 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.085972 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.085992 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.086005 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:51Z","lastTransitionTime":"2026-01-22T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.188348 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.188398 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.188410 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.188429 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.188442 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:51Z","lastTransitionTime":"2026-01-22T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.292158 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.292252 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.292276 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.292308 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.292377 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:51Z","lastTransitionTime":"2026-01-22T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.394614 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.394669 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.394685 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.394707 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.394724 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:51Z","lastTransitionTime":"2026-01-22T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.497257 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.497310 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.497325 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.497361 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.497375 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:51Z","lastTransitionTime":"2026-01-22T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.600659 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.600744 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.600763 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.600791 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.600810 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:51Z","lastTransitionTime":"2026-01-22T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.703947 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.703982 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.703990 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.704003 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.704012 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:51Z","lastTransitionTime":"2026-01-22T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.747296 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:39:10.571243346 +0000 UTC Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.754692 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.754692 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.754842 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:51 crc kubenswrapper[4975]: E0122 10:37:51.754989 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:51 crc kubenswrapper[4975]: E0122 10:37:51.755063 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:51 crc kubenswrapper[4975]: E0122 10:37:51.755140 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.806465 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.806508 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.806523 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.806547 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.806563 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:51Z","lastTransitionTime":"2026-01-22T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
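The certificate_manager lines interleaved here report rotation deadlines (2026-01-17 14:11:39 and 2026-01-05 10:39:10) that are already in the past at 2026-01-22, which is why the kubelet keeps re-evaluating rotation of its kubelet-serving certificate: the deadline is drawn at random from the tail of the certificate's validity window, and each log line reflects a fresh draw. A rough sketch of that jittered-deadline computation follows; the 80%..100% window and the issue time are assumptions for illustration, not the kubelet's exact constants:

// rotation.go - illustrative sketch of a jittered rotation deadline: pick a
// random instant in the last 20% of the certificate's lifetime, the general
// scheme behind the "rotation deadline is ..." lines. The fractions are an
// assumption, not the kubelet certificate manager's actual policy.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	// Random point between 80% and 100% of the lifetime.
	offset := time.Duration(0.8*float64(lifetime)) +
		time.Duration(rand.Int63n(int64(lifetime/5)))
	return notBefore.Add(offset)
}

func main() {
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // hypothetical issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiration from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}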
Has your network provider started?"} Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.909638 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.909700 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.909722 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.909752 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:51 crc kubenswrapper[4975]: I0122 10:37:51.909770 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:51Z","lastTransitionTime":"2026-01-22T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.013165 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.013246 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.013265 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.013293 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.013311 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.116543 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.116622 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.116647 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.116676 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.116699 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.220520 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.220594 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.220610 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.220634 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.220648 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.323442 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.323494 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.323514 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.323537 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.323553 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.426734 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.426783 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.426795 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.426818 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.426834 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.529451 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.529541 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.529556 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.529582 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.529599 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.632724 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.632774 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.632786 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.632805 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.632817 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.742749 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.742797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.742807 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.742828 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.742840 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.751358 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:42:48.543795362 +0000 UTC Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.754734 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:52 crc kubenswrapper[4975]: E0122 10:37:52.754911 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.844807 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.844840 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.844849 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.844862 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.844873 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.947151 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.947201 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.947212 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.947228 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:52 crc kubenswrapper[4975]: I0122 10:37:52.947240 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:52Z","lastTransitionTime":"2026-01-22T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.049541 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.049581 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.049594 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.049610 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.049624 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.152236 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.152277 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.152288 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.152303 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.152318 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.253914 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.253953 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.253965 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.253981 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.253992 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.356696 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.356846 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.356870 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.356901 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.356922 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.459813 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.459879 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.459897 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.459929 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.459946 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.563010 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.563073 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.563090 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.563118 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.563135 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.667300 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.667385 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.667400 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.667421 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.667439 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.752181 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:31:25.891467238 +0000 UTC Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.754641 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.754815 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:53 crc kubenswrapper[4975]: E0122 10:37:53.754905 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:53 crc kubenswrapper[4975]: E0122 10:37:53.755087 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.755262 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:53 crc kubenswrapper[4975]: E0122 10:37:53.755422 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.769319 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.769382 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.769392 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.769406 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.769417 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.871189 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.871252 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.871268 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.871718 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.871773 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.975174 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.975232 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.975249 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.975276 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:53 crc kubenswrapper[4975]: I0122 10:37:53.975292 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:53Z","lastTransitionTime":"2026-01-22T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.077944 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.077993 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.078009 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.078034 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.078052 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:54Z","lastTransitionTime":"2026-01-22T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.180682 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.180745 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.180765 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.180790 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.180808 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:54Z","lastTransitionTime":"2026-01-22T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.283124 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.283174 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.283195 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.283215 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.283230 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:54Z","lastTransitionTime":"2026-01-22T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.386193 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.386250 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.386261 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.386279 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.386291 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:54Z","lastTransitionTime":"2026-01-22T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.489044 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.489097 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.489113 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.489135 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.489153 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:54Z","lastTransitionTime":"2026-01-22T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.592537 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.592629 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.592655 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.592689 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.592712 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:54Z","lastTransitionTime":"2026-01-22T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.694638 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.694708 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.694726 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.694751 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.694765 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:54Z","lastTransitionTime":"2026-01-22T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.753216 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:30:50.41173657 +0000 UTC Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.754451 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:54 crc kubenswrapper[4975]: E0122 10:37:54.754604 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.801127 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.801180 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.801196 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.801216 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.801235 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:54Z","lastTransitionTime":"2026-01-22T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.903720 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.903780 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.903798 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.903822 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:54 crc kubenswrapper[4975]: I0122 10:37:54.903839 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:54Z","lastTransitionTime":"2026-01-22T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.005966 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.006005 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.006019 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.006036 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.006049 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.108027 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.108068 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.108085 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.108108 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.108124 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.211569 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.211674 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.211694 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.212365 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.212438 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.316092 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.316138 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.316149 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.316169 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.316179 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.419788 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.419916 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.419943 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.419977 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.420004 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.522983 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.523042 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.523054 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.523067 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.523077 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.626318 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.626378 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.626387 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.626401 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.626412 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.728848 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.728903 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.728920 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.728943 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.728962 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.753569 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:51:01.451076481 +0000 UTC Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.754089 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.754127 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.754087 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:55 crc kubenswrapper[4975]: E0122 10:37:55.754221 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:55 crc kubenswrapper[4975]: E0122 10:37:55.754316 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:55 crc kubenswrapper[4975]: E0122 10:37:55.754428 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.765524 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.831793 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.831835 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.831846 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.831862 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.831874 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.935224 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.935297 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.935308 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.935325 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:55 crc kubenswrapper[4975]: I0122 10:37:55.935355 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:55Z","lastTransitionTime":"2026-01-22T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.038037 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.038120 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.038148 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.038179 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.038205 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.140925 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.140962 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.140971 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.140988 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.140998 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.243280 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.243381 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.243402 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.243464 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.243485 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.346022 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.346218 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.346392 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.346758 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.346904 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.450381 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.450427 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.450436 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.450453 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.450463 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.557970 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.558015 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.558023 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.558040 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.558053 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.661278 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.661372 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.661390 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.661416 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.661437 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.753899 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:46:25.406003312 +0000 UTC Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.754039 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:56 crc kubenswrapper[4975]: E0122 10:37:56.754164 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.763856 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.763913 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.763932 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.763956 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.763972 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.866239 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.866285 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.866296 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.866311 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.866320 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.969045 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.969105 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.969116 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.969135 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:56 crc kubenswrapper[4975]: I0122 10:37:56.969149 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:56Z","lastTransitionTime":"2026-01-22T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.072299 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.072390 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.072404 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.072423 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.072445 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.175506 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.175579 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.175597 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.175625 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.175643 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.190854 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:57 crc kubenswrapper[4975]: E0122 10:37:57.191071 4975 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:57 crc kubenswrapper[4975]: E0122 10:37:57.191198 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs podName:50be08c7-0f3c-47b2-b15c-94dea396c9e1 nodeName:}" failed. No retries permitted until 2026-01-22 10:38:29.191170009 +0000 UTC m=+101.849206159 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs") pod "network-metrics-daemon-v5jbd" (UID: "50be08c7-0f3c-47b2-b15c-94dea396c9e1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.278243 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.278305 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.278317 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.278357 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.278368 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.380868 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.380917 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.380931 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.380947 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.381285 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.482511 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.482532 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.482542 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.482554 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.482563 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.584941 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.584990 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.585004 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.585023 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.585038 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.687078 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.687137 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.687155 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.687180 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.687201 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.754256 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:36:41.753294696 +0000 UTC Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.754451 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.754593 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:57 crc kubenswrapper[4975]: E0122 10:37:57.754745 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.754777 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:57 crc kubenswrapper[4975]: E0122 10:37:57.754894 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:57 crc kubenswrapper[4975]: E0122 10:37:57.755064 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.769159 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.786460 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.789245 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.789293 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.789361 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.789385 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.789399 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.798696 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.815251 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:43Z\\\",\\\"message\\\":\\\" obj_retry.go:551] Creating *factory.egressNode crc took: 3.572026ms\\\\nI0122 10:37:43.881635 6660 factory.go:1336] Added *v1.Node event handler 7\\\\nI0122 10:37:43.881675 6660 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 10:37:43.881696 6660 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:43.881701 6660 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:37:43.881731 6660 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:43.881747 6660 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:43.881768 6660 factory.go:656] Stopping watch factory\\\\nI0122 10:37:43.881775 6660 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:43.881797 6660 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:43.881809 6660 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:43.881813 6660 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:43.882123 6660 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:37:43.882253 6660 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:37:43.882304 6660 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:43.882383 6660 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:37:43.882489 6660 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.828927 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.841138 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.860966 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.872468 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.895873 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.895902 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.895909 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.895922 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.895931 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.905450 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.930457 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.942645 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.952200 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf009362-788d-4aa1-b72a-65d7963e0db5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.971832 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.985448 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.997086 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.998298 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.998347 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.998358 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.998373 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:57 crc kubenswrapper[4975]: I0122 10:37:57.998383 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:57Z","lastTransitionTime":"2026-01-22T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.009110 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.021658 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.031900 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.041878 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.100679 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.100737 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.100747 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.100759 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.100768 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:58Z","lastTransitionTime":"2026-01-22T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.202403 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.202509 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.202529 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.202554 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.202572 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:58Z","lastTransitionTime":"2026-01-22T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.305084 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.305157 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.305181 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.305214 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.305238 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:58Z","lastTransitionTime":"2026-01-22T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.407262 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.407310 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.407321 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.407349 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.407359 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:58Z","lastTransitionTime":"2026-01-22T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.510123 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.510187 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.510205 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.510229 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.510247 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:58Z","lastTransitionTime":"2026-01-22T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.612621 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.612674 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.612692 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.612716 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.612732 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:58Z","lastTransitionTime":"2026-01-22T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.715693 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.715743 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.715756 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.715772 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.715783 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:58Z","lastTransitionTime":"2026-01-22T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.754597 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 18:48:04.335110926 +0000 UTC Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.754799 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:37:58 crc kubenswrapper[4975]: E0122 10:37:58.754941 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.818238 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.818302 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.818321 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.818377 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.818395 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:58Z","lastTransitionTime":"2026-01-22T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.920653 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.920987 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.920999 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.921015 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:58 crc kubenswrapper[4975]: I0122 10:37:58.921027 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:58Z","lastTransitionTime":"2026-01-22T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.024160 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.024204 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.024216 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.024233 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.024248 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.127480 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.127521 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.127534 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.127552 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.127563 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.230174 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.230238 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.230255 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.230279 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.230295 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.249063 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/0.log" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.249129 4975 generic.go:334] "Generic (PLEG): container finished" podID="9fcb4bcb-0a79-4420-bb51-e4585d3306a9" containerID="6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d" exitCode=1 Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.249165 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmklh" event={"ID":"9fcb4bcb-0a79-4420-bb51-e4585d3306a9","Type":"ContainerDied","Data":"6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.250612 4975 scope.go:117] "RemoveContainer" containerID="6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.267036 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.279937 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.292514 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.302497 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.327966 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:43Z\\\",\\\"message\\\":\\\" obj_retry.go:551] Creating *factory.egressNode crc took: 3.572026ms\\\\nI0122 10:37:43.881635 6660 factory.go:1336] Added *v1.Node event handler 7\\\\nI0122 10:37:43.881675 6660 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 10:37:43.881696 6660 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:43.881701 6660 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:37:43.881731 6660 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:43.881747 6660 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:43.881768 6660 factory.go:656] Stopping watch factory\\\\nI0122 10:37:43.881775 6660 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:43.881797 6660 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:43.881809 6660 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:43.881813 6660 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:43.882123 6660 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:37:43.882253 6660 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:37:43.882304 6660 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:43.882383 6660 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:37:43.882489 6660 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.333485 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.333532 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.333548 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.333571 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.333587 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.340758 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.358186 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.373668 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.390312 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:58Z\\\",\\\"message\\\":\\\"2026-01-22T10:37:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918\\\\n2026-01-22T10:37:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918 to /host/opt/cni/bin/\\\\n2026-01-22T10:37:13Z [verbose] multus-daemon started\\\\n2026-01-22T10:37:13Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:37:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.402715 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 
10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.415078 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.427759 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.435437 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.435488 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.435504 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.435527 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.435543 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.436857 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf009362-788d-4aa1-b72a-65d7963e0db5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.457039 4975 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:
36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.467890 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.478683 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.490815 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.503012 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.513776 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:37:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.537782 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.537847 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.537896 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.537925 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.537950 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.640970 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.641015 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.641027 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.641042 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.641054 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.743976 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.744054 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.744078 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.744108 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.744132 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.754780 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.754851 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.754836 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:16:09.421733879 +0000 UTC Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.754780 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:37:59 crc kubenswrapper[4975]: E0122 10:37:59.754943 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:37:59 crc kubenswrapper[4975]: E0122 10:37:59.755117 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:37:59 crc kubenswrapper[4975]: E0122 10:37:59.755245 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.846395 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.846429 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.846439 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.846456 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.846468 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.948620 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.948681 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.948697 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.948721 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:37:59 crc kubenswrapper[4975]: I0122 10:37:59.948738 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:37:59Z","lastTransitionTime":"2026-01-22T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.050624 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.050658 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.050666 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.050678 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.050688 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.153251 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.153292 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.153301 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.153319 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.153352 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.174986 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.175096 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.175116 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.175182 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.175208 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:00 crc kubenswrapper[4975]: E0122 10:38:00.191554 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.196271 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.196376 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.196439 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.196459 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.196472 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: E0122 10:38:00.211494 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.216464 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.216509 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
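[Editor's note] Every one of these status patches fails the same way: the API server must consult the node.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743, and the webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock time of 2026-01-22. The x509 verdict in the log is the standard validity-window check. A sketch of that check in Go, with webhook-cert.pem as a hypothetical local path (the real certificate lives in a secret managed by the cluster network operator):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical path; substitute the webhook's actual serving cert.
    	pemBytes, err := os.ReadFile("webhook-cert.pem")
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println("parse:", err)
    		return
    	}
    	now := time.Now()
    	// This window comparison is what produces
    	// "certificate has expired or is not yet valid".
    	switch {
    	case now.After(cert.NotAfter):
    		fmt.Printf("expired: current time %s is after %s\n",
    			now.UTC().Format(time.RFC3339),
    			cert.NotAfter.UTC().Format(time.RFC3339))
    	case now.Before(cert.NotBefore):
    		fmt.Println("not yet valid: NotBefore is", cert.NotBefore.UTC())
    	default:
    		fmt.Println("valid until", cert.NotAfter.UTC())
    	}
    }

Because the failure is on the webhook path rather than in the patch itself, retrying the identical patch cannot succeed until the certificate is rotated.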
event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.216523 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.216541 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.216554 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: E0122 10:38:00.231763 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.235701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.235732 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
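[Editor's note] The condition object printed by setters.go:603 is a plain JSON rendering of the node's Ready condition, and the patch body uses strategic-merge-patch conventions ($setElementOrder/conditions plus condition entries keyed by type). A small sketch that parses such a condition from the log text; the struct mirrors only the fields visible here, not the full Kubernetes v1.NodeCondition, and the message is shortened for readability:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // NodeCondition covers the fields shown in the "Node became not ready"
    // entries above (a subset of the real API type).
    type NodeCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	// Condition payload copied (message abbreviated) from the log above.
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
    	var c NodeCondition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		fmt.Println("unmarshal:", err)
    		return
    	}
    	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }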
event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.235740 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.235752 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.235761 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: E0122 10:38:00.248454 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.252191 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.252234 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.252245 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.252265 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.252278 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.254276 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/0.log" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.254318 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmklh" event={"ID":"9fcb4bcb-0a79-4420-bb51-e4585d3306a9","Type":"ContainerStarted","Data":"fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608"} Jan 22 10:38:00 crc kubenswrapper[4975]: E0122 10:38:00.270183 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: E0122 10:38:00.270377 4975 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.271787 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.271824 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.271835 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.271848 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.271856 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.271896 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.283545 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.301027 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:58Z\\\",\\\"message\\\":\\\"2026-01-22T10:37:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918\\\\n2026-01-22T10:37:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918 to /host/opt/cni/bin/\\\\n2026-01-22T10:37:13Z [verbose] multus-daemon started\\\\n2026-01-22T10:37:13Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:37:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.351611 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 
10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.365480 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.374464 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.374535 4975 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.374553 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.374579 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.374599 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.380036 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.390974 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf009362-788d-4aa1-b72a-65d7963e0db5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.408802 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.423093 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.434136 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.445091 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.457816 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.471180 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.476671 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.476714 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.476736 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.476751 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.476761 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.479648 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.489624 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.499453 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.514214 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.541400 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:43Z\\\",\\\"message\\\":\\\" obj_retry.go:551] Creating *factory.egressNode crc took: 3.572026ms\\\\nI0122 10:37:43.881635 6660 factory.go:1336] Added *v1.Node event handler 7\\\\nI0122 10:37:43.881675 6660 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 10:37:43.881696 6660 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:43.881701 6660 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:37:43.881731 6660 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:43.881747 6660 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:43.881768 6660 factory.go:656] Stopping watch factory\\\\nI0122 10:37:43.881775 6660 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:43.881797 6660 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:43.881809 6660 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:43.881813 6660 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:43.882123 6660 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:37:43.882253 6660 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:37:43.882304 6660 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:43.882383 6660 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:37:43.882489 6660 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.561032 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.579068 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.579097 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.579105 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.579118 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.579127 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.681474 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.681528 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.681540 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.681559 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.681571 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.754687 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:00 crc kubenswrapper[4975]: E0122 10:38:00.754874 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.754992 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:41:47.00695941 +0000 UTC Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.755911 4975 scope.go:117] "RemoveContainer" containerID="f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814" Jan 22 10:38:00 crc kubenswrapper[4975]: E0122 10:38:00.756204 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.785100 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.785191 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.785219 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.785252 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.785314 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.887539 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.887593 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.887604 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.887622 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.887635 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.990874 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.990922 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.990935 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.990955 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:00 crc kubenswrapper[4975]: I0122 10:38:00.990966 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:00Z","lastTransitionTime":"2026-01-22T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.094035 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.094131 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.094149 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.094174 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.094193 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:01Z","lastTransitionTime":"2026-01-22T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.197737 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.197785 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.197796 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.197813 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.197826 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:01Z","lastTransitionTime":"2026-01-22T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.302053 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.302109 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.302125 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.302186 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.302204 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:01Z","lastTransitionTime":"2026-01-22T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.405018 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.405085 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.405101 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.405126 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.405142 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:01Z","lastTransitionTime":"2026-01-22T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.507404 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.507467 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.507476 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.507490 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.507500 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:01Z","lastTransitionTime":"2026-01-22T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.610181 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.610265 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.610290 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.610320 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.610377 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:01Z","lastTransitionTime":"2026-01-22T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.714098 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.714159 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.714179 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.714209 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.714229 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:01Z","lastTransitionTime":"2026-01-22T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.759124 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:01 crc kubenswrapper[4975]: E0122 10:38:01.759325 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.759456 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:01 crc kubenswrapper[4975]: E0122 10:38:01.759547 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.759863 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:01 crc kubenswrapper[4975]: E0122 10:38:01.759945 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.760200 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:59:07.830121018 +0000 UTC Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.818192 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.818250 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.818273 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.818302 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.818397 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:01Z","lastTransitionTime":"2026-01-22T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.921616 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.921665 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.921681 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.921703 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:01 crc kubenswrapper[4975]: I0122 10:38:01.921721 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:01Z","lastTransitionTime":"2026-01-22T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.024322 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.024400 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.024416 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.024439 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.024455 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.127047 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.127095 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.127110 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.127132 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.127148 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.230869 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.230915 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.230930 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.230952 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.230970 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.333977 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.334073 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.334125 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.334153 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.334171 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.437661 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.437706 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.437724 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.437746 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.437763 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.540917 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.540964 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.540982 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.541004 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.541020 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.644235 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.644288 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.644304 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.644325 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.644370 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.747523 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.747572 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.747587 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.747609 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.747625 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.753773 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:02 crc kubenswrapper[4975]: E0122 10:38:02.753926 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.761200 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:25:17.404433279 +0000 UTC Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.851036 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.851089 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.851105 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.851130 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.851147 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.955119 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.955199 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.955220 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.955248 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:02 crc kubenswrapper[4975]: I0122 10:38:02.955266 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:02Z","lastTransitionTime":"2026-01-22T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.058088 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.058133 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.058141 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.058156 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.058165 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.160875 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.160920 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.160931 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.160949 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.160960 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.266310 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.266406 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.266431 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.266455 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.266474 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.369829 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.369895 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.369913 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.369940 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.369961 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.472788 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.472830 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.472840 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.472857 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.472868 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.576296 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.576410 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.576434 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.576467 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.576492 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.680109 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.680178 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.680195 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.680219 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.680241 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.753852 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.753981 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:03 crc kubenswrapper[4975]: E0122 10:38:03.754202 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.754367 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:03 crc kubenswrapper[4975]: E0122 10:38:03.754451 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:03 crc kubenswrapper[4975]: E0122 10:38:03.754548 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.761801 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:08:07.678177492 +0000 UTC Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.783201 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.783260 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.783277 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.783301 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.783322 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.885755 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.885828 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.885848 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.885871 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.885888 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.988947 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.989017 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.989037 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.989061 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:03 crc kubenswrapper[4975]: I0122 10:38:03.989081 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:03Z","lastTransitionTime":"2026-01-22T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.092365 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.092411 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.092423 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.092447 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.092460 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:04Z","lastTransitionTime":"2026-01-22T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.195538 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.195604 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.195623 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.195647 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.195700 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:04Z","lastTransitionTime":"2026-01-22T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.298356 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.298403 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.298419 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.298443 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.298460 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:04Z","lastTransitionTime":"2026-01-22T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.401713 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.401761 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.401777 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.401802 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.401819 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:04Z","lastTransitionTime":"2026-01-22T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.505141 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.505209 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.505226 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.505253 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.505270 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:04Z","lastTransitionTime":"2026-01-22T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.608444 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.608632 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.608657 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.608683 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.608701 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:04Z","lastTransitionTime":"2026-01-22T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.712098 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.712146 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.712163 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.712186 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.712203 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:04Z","lastTransitionTime":"2026-01-22T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.754531 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:04 crc kubenswrapper[4975]: E0122 10:38:04.754735 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.762593 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:51:09.127841091 +0000 UTC Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.815743 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.816101 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.816241 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.816420 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.816558 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:04Z","lastTransitionTime":"2026-01-22T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.919905 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.919960 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.919977 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.920003 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:04 crc kubenswrapper[4975]: I0122 10:38:04.920021 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:04Z","lastTransitionTime":"2026-01-22T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.023039 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.023081 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.023092 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.023110 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.023121 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.125328 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.125407 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.125424 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.125447 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.125465 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.234981 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.235042 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.235059 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.235085 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.235107 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.338273 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.338378 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.338393 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.338411 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.338426 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.442856 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.442892 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.442909 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.442930 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.442944 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.546217 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.546281 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.546298 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.546320 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.546369 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.649483 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.649575 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.649603 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.649637 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.649661 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.752203 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.752263 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.752280 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.752305 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.752322 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.754524 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.754583 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.754543 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:05 crc kubenswrapper[4975]: E0122 10:38:05.754740 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:05 crc kubenswrapper[4975]: E0122 10:38:05.754842 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:05 crc kubenswrapper[4975]: E0122 10:38:05.755122 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.762725 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:34:12.028161476 +0000 UTC Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.854960 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.855011 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.855032 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.855055 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.855072 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.958259 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.958371 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.958397 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.958427 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:05 crc kubenswrapper[4975]: I0122 10:38:05.958461 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:05Z","lastTransitionTime":"2026-01-22T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.062038 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.062112 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.062135 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.062162 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.062184 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.165231 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.165268 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.165278 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.165292 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.165302 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.268048 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.268109 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.268128 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.268151 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.268169 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.370917 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.370997 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.371014 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.371035 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.371049 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.473733 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.473815 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.473836 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.473871 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.473893 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.577400 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.577461 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.577472 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.577491 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.577504 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.680733 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.680787 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.680797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.680819 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.680830 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.754723 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:06 crc kubenswrapper[4975]: E0122 10:38:06.754914 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.763772 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:18:06.162343991 +0000 UTC Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.783596 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.783636 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.783647 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.783662 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.783675 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.886095 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.886177 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.886202 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.886226 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.886244 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.989022 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.989106 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.989132 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.989167 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:06 crc kubenswrapper[4975]: I0122 10:38:06.989191 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:06Z","lastTransitionTime":"2026-01-22T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.091741 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.091802 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.091824 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.091853 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.091874 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:07Z","lastTransitionTime":"2026-01-22T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.195098 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.195168 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.195187 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.195215 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.195234 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:07Z","lastTransitionTime":"2026-01-22T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.298437 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.298545 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.298563 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.298586 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.298602 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:07Z","lastTransitionTime":"2026-01-22T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.401122 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.401184 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.401196 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.401213 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.401225 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:07Z","lastTransitionTime":"2026-01-22T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.504464 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.504556 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.504576 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.504598 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.504613 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:07Z","lastTransitionTime":"2026-01-22T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.607169 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.607252 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.607273 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.607301 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.607318 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:07Z","lastTransitionTime":"2026-01-22T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.710723 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.710850 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.710872 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.710931 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.710957 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:07Z","lastTransitionTime":"2026-01-22T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.754663 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:07 crc kubenswrapper[4975]: E0122 10:38:07.754812 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.754932 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:07 crc kubenswrapper[4975]: E0122 10:38:07.755150 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.755243 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:07 crc kubenswrapper[4975]: E0122 10:38:07.755347 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.765253 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:31:53.09315522 +0000 UTC Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.774159 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.800156 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.815955 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.816008 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.816024 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.816051 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.816068 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:07Z","lastTransitionTime":"2026-01-22T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.825247 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.846511 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:58Z\\\",\\\"message\\\":\\\"2026-01-22T10:37:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918\\\\n2026-01-22T10:37:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918 to /host/opt/cni/bin/\\\\n2026-01-22T10:37:13Z [verbose] multus-daemon started\\\\n2026-01-22T10:37:13Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:37:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.876033 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 
10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.900257 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.919513 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.921134 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.921176 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.921194 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.921219 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.921239 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:07Z","lastTransitionTime":"2026-01-22T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.937817 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf009362-788d-4aa1-b72a-65d7963e0db5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.974022 4975 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:
36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:07 crc kubenswrapper[4975]: I0122 10:38:07.991114 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:07Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.013139 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.024390 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.024450 4975 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.024465 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.024488 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.024502 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.034557 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
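The terminated lastState entries above report exitCode 137 for containers the runtime could no longer locate after the pod was deleted. By shell convention, exit codes above 128 encode 128 + signal number, so 137 means SIGKILL. A small decoder sketch (the mapping is convention, not a Kubernetes API):

package main

import (
	"fmt"
	"syscall"
)

// describe decodes a container exit code by shell convention: values above
// 128 mean "terminated by signal (code - 128)".
func describe(code int) string {
	if code > 128 {
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("killed by signal %d (%v)", code-128, sig)
	}
	return fmt.Sprintf("exited with status %d", code)
}

func main() {
	fmt.Println(137, describe(137)) // 128+9: SIGKILL, as in the lastState entries above
	fmt.Println(0, describe(0))     // the Completed init containers
	fmt.Println(1, describe(1))     // the ovnkube-controller failure further down
}
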
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.051947 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.073732 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.097796 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.114229 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.125758 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
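To confirm what certificate the webhook at 127.0.0.1:9743 is actually presenting, one can complete a TLS handshake without verification and print the peer chain. A sketch, which would have to run on the node itself since the Post URLs above target loopback:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// InsecureSkipVerify is deliberate: the point is to read the expired
	// certificate, not to validate it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%v notBefore=%v notAfter=%v\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
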
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.127703 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.127747 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.127765 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.127791 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.127809 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
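The recurring NodeNotReady condition is a separate symptom from the webhook failures: the kubelet reports NetworkPluginNotReady because /etc/kubernetes/cni/net.d/ holds no CNI config, presumably because the crash-looping ovnkube-controller (further down) has not yet written one. A reduced sketch of that readiness check, loosely modeled on libcni's config discovery rather than copied from it:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	var found []string
	// Extensions follow CNI convention (.conf, .conflist, .json).
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			continue
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file in", dir) // the kubelet's complaint
		os.Exit(1)
	}
	for _, f := range found {
		fmt.Println("found:", f)
	}
}
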
Has your network provider started?"} Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.138289 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.156640 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:43Z\\\",\\\"message\\\":\\\" obj_retry.go:551] Creating *factory.egressNode crc took: 3.572026ms\\\\nI0122 10:37:43.881635 6660 factory.go:1336] Added *v1.Node event handler 7\\\\nI0122 10:37:43.881675 6660 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 10:37:43.881696 6660 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:43.881701 6660 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:37:43.881731 6660 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:43.881747 6660 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:43.881768 6660 factory.go:656] Stopping watch factory\\\\nI0122 10:37:43.881775 6660 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:43.881797 6660 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:43.881809 6660 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:43.881813 6660 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:43.882123 6660 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:37:43.882253 6660 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:37:43.882304 6660 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:43.882383 6660 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:37:43.882489 6660 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:08Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.230393 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.230454 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.230473 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.230496 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.230514 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.333786 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.333826 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.333837 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.333852 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.333862 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.437351 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.437393 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.437405 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.437424 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.437436 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.540453 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.540502 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.540513 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.540535 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.540547 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.643596 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.643676 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.643703 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.643737 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.643760 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.746500 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.746554 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.746565 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.746582 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.746596 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.753751 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:08 crc kubenswrapper[4975]: E0122 10:38:08.753860 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.766150 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:46:50.995943102 +0000 UTC Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.849404 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.849504 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.849533 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.849561 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.849581 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.951874 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.951940 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.951958 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.951983 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:08 crc kubenswrapper[4975]: I0122 10:38:08.952000 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:08Z","lastTransitionTime":"2026-01-22T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.055496 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.055592 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.055617 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.055648 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.055671 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.158724 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.158758 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.158767 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.158778 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.158786 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.261401 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.261721 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.261867 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.262001 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.262176 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.365420 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.366009 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.366054 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.366078 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.366098 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.469137 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.469643 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.469836 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.470021 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.470234 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.573952 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.574016 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.574034 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.574059 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.574078 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.677440 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.677748 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.677928 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.678086 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.678232 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.754263 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.754400 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:09 crc kubenswrapper[4975]: E0122 10:38:09.754465 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:09 crc kubenswrapper[4975]: E0122 10:38:09.754670 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.754949 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:09 crc kubenswrapper[4975]: E0122 10:38:09.755386 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.766890 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:21:08.241485764 +0000 UTC Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.781556 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.781613 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.781635 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.781667 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.781691 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.885967 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.886041 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.886063 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.886301 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.886353 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.988635 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.988708 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.988732 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.988763 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:09 crc kubenswrapper[4975]: I0122 10:38:09.988784 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:09Z","lastTransitionTime":"2026-01-22T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.090670 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.090737 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.090759 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.090789 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.090813 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.194156 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.194217 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.194236 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.194260 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.194278 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.296969 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.297067 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.297087 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.297110 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.297127 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.353815 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.353847 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.353856 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.353870 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.353881 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.375942 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:10Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.381640 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.381706 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.381719 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.381736 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.381769 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.402062 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:10Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.408409 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.408486 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.408513 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.408544 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.408567 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.424235 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:10Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.428982 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.429035 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.429051 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.429074 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.429092 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.447899 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:10Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.452580 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.452855 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.452887 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.452961 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.452986 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.472589 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:10Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.472917 4975 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.475033 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.475129 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.475149 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.475211 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.475242 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.578373 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.578438 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.578454 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.578480 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.578498 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.646862 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.647140 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.647240 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.647510 4975 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.647625 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:14.647595333 +0000 UTC m=+147.305631493 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.648008 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:14.647985305 +0000 UTC m=+147.306021465 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.648158 4975 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.648245 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:14.648211481 +0000 UTC m=+147.306247631 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.748426 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.748512 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.748716 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.748740 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.748759 4975 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.748822 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:14.748799841 +0000 UTC m=+147.406835991 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.749062 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.749081 4975 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.749095 4975 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.749134 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:14.74912061 +0000 UTC m=+147.407156760 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.920855 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:10 crc kubenswrapper[4975]: E0122 10:38:10.921187 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.924872 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:14:56.390742775 +0000 UTC Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.925158 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.925195 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.925209 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.925228 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:10 crc kubenswrapper[4975]: I0122 10:38:10.925248 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:10Z","lastTransitionTime":"2026-01-22T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.028319 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.028406 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.028460 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.028483 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.028500 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:11Z","lastTransitionTime":"2026-01-22T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.748626 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.748660 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.748670 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.748686 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.748697 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:11Z","lastTransitionTime":"2026-01-22T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.757465 4975 scope.go:117] "RemoveContainer" containerID="f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.757831 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:11 crc kubenswrapper[4975]: E0122 10:38:11.757894 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.758033 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:11 crc kubenswrapper[4975]: E0122 10:38:11.758087 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.758206 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:11 crc kubenswrapper[4975]: E0122 10:38:11.758268 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.851757 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.852192 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.852212 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.852237 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.852254 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:11Z","lastTransitionTime":"2026-01-22T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.925952 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:19:55.13055619 +0000 UTC Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.956096 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.956157 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.956179 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.956215 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:11 crc kubenswrapper[4975]: I0122 10:38:11.956240 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:11Z","lastTransitionTime":"2026-01-22T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.676899 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.676948 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.676965 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.676989 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.677006 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:12Z","lastTransitionTime":"2026-01-22T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.754500 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:12 crc kubenswrapper[4975]: E0122 10:38:12.754713 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.780396 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.780469 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.780494 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.780526 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.780548 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:12Z","lastTransitionTime":"2026-01-22T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.883487 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.883534 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.883583 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.883605 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.883622 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:12Z","lastTransitionTime":"2026-01-22T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.926527 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:52:46.790657275 +0000 UTC Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.986269 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.986326 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.986363 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.986386 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:12 crc kubenswrapper[4975]: I0122 10:38:12.986401 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:12Z","lastTransitionTime":"2026-01-22T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.302838 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/2.log" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.306019 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.306609 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.321631 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.336016 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.367599 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:43Z\\\",\\\"message\\\":\\\" obj_retry.go:551] Creating *factory.egressNode crc took: 3.572026ms\\\\nI0122 10:37:43.881635 6660 factory.go:1336] Added *v1.Node event handler 7\\\\nI0122 10:37:43.881675 6660 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 10:37:43.881696 6660 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:43.881701 6660 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:37:43.881731 6660 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:43.881747 6660 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:43.881768 6660 factory.go:656] Stopping watch factory\\\\nI0122 10:37:43.881775 6660 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:43.881797 6660 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:43.881809 6660 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:43.881813 6660 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:43.882123 6660 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:37:43.882253 6660 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:37:43.882304 6660 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:43.882383 6660 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:37:43.882489 6660 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.385101 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.398464 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.398504 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:13 crc 
kubenswrapper[4975]: I0122 10:38:13.398515 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.398531 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.398544 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:13Z","lastTransitionTime":"2026-01-22T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.403502 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.425068 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.442977 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:58Z\\\",\\\"message\\\":\\\"2026-01-22T10:37:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918\\\\n2026-01-22T10:37:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918 to /host/opt/cni/bin/\\\\n2026-01-22T10:37:13Z [verbose] multus-daemon started\\\\n2026-01-22T10:37:13Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:37:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.458747 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 
10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.476097 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.487623 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.498787 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf009362-788d-4aa1-b72a-65d7963e0db5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.500171 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.500202 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.500215 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.500232 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.500244 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:13Z","lastTransitionTime":"2026-01-22T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.522069 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.540283 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.556117 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.568085 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.585088 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.600813 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.602307 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.602400 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.602419 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.602444 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.602461 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:13Z","lastTransitionTime":"2026-01-22T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.613841 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.628743 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.705614 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.705676 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.705695 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.705718 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.705736 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:13Z","lastTransitionTime":"2026-01-22T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.754508 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.754518 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.754664 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:13 crc kubenswrapper[4975]: E0122 10:38:13.754842 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:13 crc kubenswrapper[4975]: E0122 10:38:13.754991 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:13 crc kubenswrapper[4975]: E0122 10:38:13.755089 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.807958 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.808028 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.808048 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.808077 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.808099 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:13Z","lastTransitionTime":"2026-01-22T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.910949 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.911013 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.911029 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.911051 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.911093 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:13Z","lastTransitionTime":"2026-01-22T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:13 crc kubenswrapper[4975]: I0122 10:38:13.927392 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:32:23.185755293 +0000 UTC Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.014502 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.014562 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.014579 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.014603 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.014620 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.145550 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.145624 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.145643 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.145669 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.145686 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.248757 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.248794 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.248805 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.248822 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.248835 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.312374 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/3.log" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.313580 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/2.log" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.318768 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" exitCode=1 Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.318820 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.318942 4975 scope.go:117] "RemoveContainer" containerID="f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.320170 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:38:14 crc kubenswrapper[4975]: E0122 10:38:14.320560 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.339809 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.353107 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.353157 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.353173 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.353196 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.353215 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.356031 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.373943 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.390075 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.410110 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.429473 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.445502 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.456501 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.456560 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.456581 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.456610 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.456627 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.479524 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf823772c79c0097bcc28f6bddea3a4bb19db5d
cc7f00e350e66817cc17f369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f227a171709ade87040b42f0d28a3e7a3194d1fdab933d43e35bb6fa6bce4814\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:43Z\\\",\\\"message\\\":\\\" obj_retry.go:551] Creating *factory.egressNode crc took: 3.572026ms\\\\nI0122 10:37:43.881635 6660 factory.go:1336] Added *v1.Node event handler 7\\\\nI0122 10:37:43.881675 6660 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 10:37:43.881696 6660 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 10:37:43.881701 6660 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:37:43.881731 6660 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:37:43.881747 6660 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:37:43.881768 6660 factory.go:656] Stopping watch factory\\\\nI0122 10:37:43.881775 6660 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:37:43.881797 6660 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:37:43.881809 6660 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:37:43.881813 6660 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:37:43.882123 6660 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:37:43.882253 6660 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:37:43.882304 6660 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:37:43.882383 6660 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:37:43.882489 6660 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:38:14Z\\\",\\\"message\\\":\\\"al\\\\nI0122 10:38:13.713291 7080 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 10:38:13.710906 7080 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:38:13.713434 7080 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:38:13.713489 7080 factory.go:656] Stopping watch factory\\\\nI0122 10:38:13.713541 7080 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:38:13.713180 7080 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 10:38:13.713629 7080 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:38:13.713661 7080 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:38:13.713810 7080 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:38:13.713858 7080 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:38:13.713878 7080 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:38:13.713886 7080 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:38:13.713909 7080 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:38:13.714175 7080 
ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:38:13.714258 7080 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:38:13.714398 7080 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.503073 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.524742 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.544043 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.559423 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.559486 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.559503 4975 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.559528 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.559546 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.571059 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:58Z\\\",\\\"message\\\":\\\"2026-01-22T10:37:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918\\\\n2026-01-22T10:37:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918 to /host/opt/cni/bin/\\\\n2026-01-22T10:37:13Z [verbose] multus-daemon started\\\\n2026-01-22T10:37:13Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:37:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.590361 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 
10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.609286 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.628368 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.645519 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf009362-788d-4aa1-b72a-65d7963e0db5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.662974 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.663036 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.663059 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.663082 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.663099 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.680895 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.697690 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.715901 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:14Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.754239 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:14 crc kubenswrapper[4975]: E0122 10:38:14.754466 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.765860 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.765913 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.765931 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.765956 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.765974 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.869371 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.869441 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.869458 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.869483 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.869499 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.927959 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:33:14.933516188 +0000 UTC Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.972839 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.972907 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.972931 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.972956 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:14 crc kubenswrapper[4975]: I0122 10:38:14.972973 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:14Z","lastTransitionTime":"2026-01-22T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.076398 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.076459 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.076474 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.076499 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.076517 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:15Z","lastTransitionTime":"2026-01-22T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.179776 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.180207 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.180574 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.180744 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.180864 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:15Z","lastTransitionTime":"2026-01-22T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.285144 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.285206 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.285225 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.285253 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.285272 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:15Z","lastTransitionTime":"2026-01-22T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.326100 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/3.log" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.331513 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:38:15 crc kubenswrapper[4975]: E0122 10:38:15.331760 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.352537 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.375098 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:58Z\\\",\\\"message\\\":\\\"2026-01-22T10:37:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918\\\\n2026-01-22T10:37:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918 to /host/opt/cni/bin/\\\\n2026-01-22T10:37:13Z [verbose] multus-daemon started\\\\n2026-01-22T10:37:13Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:37:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.387905 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.387961 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.387979 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.388005 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.388023 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:15Z","lastTransitionTime":"2026-01-22T10:38:15Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.397432 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.414234 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.431896 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.445938 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf009362-788d-4aa1-b72a-65d7963e0db5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.478275 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84f
a53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.491437 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.491492 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.491509 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.491532 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.491552 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:15Z","lastTransitionTime":"2026-01-22T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.495866 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.511292 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.526852 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.542653 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.558448 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.572195 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.591487 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.595155 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.595190 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.595203 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.595220 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.595233 4975 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:15Z","lastTransitionTime":"2026-01-22T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.612157 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.628439 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.652802 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:38:14Z\\\",\\\"message\\\":\\\"al\\\\nI0122 10:38:13.713291 7080 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 10:38:13.710906 7080 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:38:13.713434 7080 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:38:13.713489 7080 factory.go:656] Stopping watch factory\\\\nI0122 10:38:13.713541 7080 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:38:13.713180 7080 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 10:38:13.713629 7080 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:38:13.713661 7080 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:38:13.713810 7080 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:38:13.713858 7080 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:38:13.713878 7080 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:38:13.713886 7080 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:38:13.713909 7080 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:38:13.714175 7080 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:38:13.714258 7080 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:38:13.714398 7080 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:38:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.676635 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.696173 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:15Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.697754 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.697802 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.697821 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.697847 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.697864 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:15Z","lastTransitionTime":"2026-01-22T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.754071 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.754115 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:15 crc kubenswrapper[4975]: E0122 10:38:15.754245 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.754256 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:15 crc kubenswrapper[4975]: E0122 10:38:15.754312 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:15 crc kubenswrapper[4975]: E0122 10:38:15.754391 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.800381 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.800413 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.800421 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.800436 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.800446 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:15Z","lastTransitionTime":"2026-01-22T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.903200 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.903253 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.903270 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.903294 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.903311 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:15Z","lastTransitionTime":"2026-01-22T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:15 crc kubenswrapper[4975]: I0122 10:38:15.928856 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:28:30.949740867 +0000 UTC Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.005940 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.006004 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.006020 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.006044 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.006062 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.108697 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.108758 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.108775 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.108798 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.108814 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.212446 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.212704 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.212773 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.212805 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.212823 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.316068 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.316125 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.316142 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.316165 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.316182 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.418687 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.418740 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.418758 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.418782 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.418799 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.522451 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.522506 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.522523 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.522547 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.522563 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.626094 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.626172 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.626191 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.626215 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.626234 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.729237 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.729312 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.729405 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.729436 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.729461 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.754746 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:16 crc kubenswrapper[4975]: E0122 10:38:16.755113 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.831854 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.831912 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.831930 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.831953 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.831979 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.930023 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:33:45.914044265 +0000 UTC Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.934299 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.934392 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.934415 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.934442 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:16 crc kubenswrapper[4975]: I0122 10:38:16.934459 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:16Z","lastTransitionTime":"2026-01-22T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.037562 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.037678 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.037701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.037730 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.037752 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.140469 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.140544 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.140565 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.140599 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.140626 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.244844 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.244922 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.244946 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.244981 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.245006 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.347416 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.347477 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.347493 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.347516 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.347533 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.450209 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.450273 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.450296 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.450327 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.450383 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.553159 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.553234 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.553257 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.553284 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.553300 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.655791 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.655864 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.655880 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.655905 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.655920 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.754549 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.754563 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.754637 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:17 crc kubenswrapper[4975]: E0122 10:38:17.755415 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:17 crc kubenswrapper[4975]: E0122 10:38:17.755740 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:17 crc kubenswrapper[4975]: E0122 10:38:17.755841 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.759554 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.759605 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.759627 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.759653 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.759675 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.771650 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.794014 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.815144 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.836493 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:58Z\\\",\\\"message\\\":\\\"2026-01-22T10:37:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918\\\\n2026-01-22T10:37:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918 to /host/opt/cni/bin/\\\\n2026-01-22T10:37:13Z [verbose] multus-daemon started\\\\n2026-01-22T10:37:13Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:37:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.852372 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 
10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.862719 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.862780 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.862797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.862820 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.862838 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.869147 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.889609 4975 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.907650 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf009362-788d-4aa1-b72a-65d7963e0db5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.930291 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:19:47.405035722 +0000 UTC Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.945178 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c
5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.966381 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.966430 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.966446 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.966471 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.966488 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:17Z","lastTransitionTime":"2026-01-22T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.967426 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:17 crc kubenswrapper[4975]: I0122 10:38:17.990285 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:17Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.011453 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.029296 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.045032 4975 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.068004 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.069980 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.070044 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc 
kubenswrapper[4975]: I0122 10:38:18.070069 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.070098 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.070119 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.085892 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.105537 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.121296 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.151111 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/o
vn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:38:14Z\\\",\\\"message\\\":\\\"al\\\\nI0122 10:38:13.713291 7080 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 10:38:13.710906 7080 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:38:13.713434 7080 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:38:13.713489 7080 factory.go:656] Stopping watch factory\\\\nI0122 10:38:13.713541 7080 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:38:13.713180 7080 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 10:38:13.713629 7080 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:38:13.713661 7080 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:38:13.713810 7080 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:38:13.713858 7080 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:38:13.713878 7080 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:38:13.713886 7080 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:38:13.713909 7080 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:38:13.714175 7080 ovnkube.go:599] Stopped 
ovnkube\\\\nI0122 10:38:13.714258 7080 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:38:13.714398 7080 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:38:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.173426 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.173503 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.173529 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.173560 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.173583 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.275179 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.275233 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.275246 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.275262 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.275273 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.377649 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.378083 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.378232 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.378392 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.378514 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.480968 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.481042 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.481171 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.481205 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.481226 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.584049 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.584090 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.584101 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.584118 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.584129 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.686716 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.686796 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.686815 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.686842 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.686862 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.754627 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:18 crc kubenswrapper[4975]: E0122 10:38:18.754821 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.789640 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.789701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.789720 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.789745 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.789763 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.892593 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.892629 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.892641 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.892656 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.892668 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.930814 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:49:19.537326675 +0000 UTC Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.995859 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.995921 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.995938 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.995961 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:18 crc kubenswrapper[4975]: I0122 10:38:18.995978 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:18Z","lastTransitionTime":"2026-01-22T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.098955 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.099025 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.099043 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.099069 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.099086 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:19Z","lastTransitionTime":"2026-01-22T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.202239 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.202295 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.202314 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.202364 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.202381 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:19Z","lastTransitionTime":"2026-01-22T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.306430 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.306507 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.306527 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.306555 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.306573 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:19Z","lastTransitionTime":"2026-01-22T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.409455 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.409520 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.409537 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.409564 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.409582 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:19Z","lastTransitionTime":"2026-01-22T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.515083 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.515139 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.515152 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.515177 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.515189 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:19Z","lastTransitionTime":"2026-01-22T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.618559 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.618608 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.618622 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.618641 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.618653 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:19Z","lastTransitionTime":"2026-01-22T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.721834 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.722191 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.722380 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.722536 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.722708 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:19Z","lastTransitionTime":"2026-01-22T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.754471 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.754575 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.754481 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:19 crc kubenswrapper[4975]: E0122 10:38:19.754674 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:19 crc kubenswrapper[4975]: E0122 10:38:19.754846 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:19 crc kubenswrapper[4975]: E0122 10:38:19.754969 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.825468 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.825547 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.825566 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.825591 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.825607 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:19Z","lastTransitionTime":"2026-01-22T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.928442 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.928818 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.928913 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.929033 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.929127 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:19Z","lastTransitionTime":"2026-01-22T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:19 crc kubenswrapper[4975]: I0122 10:38:19.931770 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:23:59.086961679 +0000 UTC Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.032303 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.032416 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.032441 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.032474 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.032496 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.134764 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.134822 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.134839 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.134862 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.134879 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.238297 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.238347 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.238358 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.238390 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.238402 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.341251 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.341371 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.341391 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.341416 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.341433 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.445004 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.445059 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.445075 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.445099 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.445117 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.548840 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.548929 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.548949 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.548976 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.548992 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.643160 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.643231 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.643256 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.643292 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.643318 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: E0122 10:38:20.665218 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:20Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.671105 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.671172 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.671194 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.671221 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.671241 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: E0122 10:38:20.690432 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:20Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.696675 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.696737 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.696748 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.696764 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.696776 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: E0122 10:38:20.714830 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:20Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.719965 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.720004 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.720013 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.720033 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.720043 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: E0122 10:38:20.737821 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:20Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.742351 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.742396 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.742407 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.742453 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.742467 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.754290 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:20 crc kubenswrapper[4975]: E0122 10:38:20.754506 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:20 crc kubenswrapper[4975]: E0122 10:38:20.761168 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:20Z is after 
2025-08-24T17:21:41Z" Jan 22 10:38:20 crc kubenswrapper[4975]: E0122 10:38:20.761361 4975 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.763966 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.764041 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.764062 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.764086 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.764104 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.867103 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.867193 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.867231 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.867264 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.867293 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.932987 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:43:20.039140811 +0000 UTC Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.970668 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.970746 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.970765 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.970789 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:20 crc kubenswrapper[4975]: I0122 10:38:20.970807 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:20Z","lastTransitionTime":"2026-01-22T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.074222 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.074318 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.074378 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.074404 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.074461 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:21Z","lastTransitionTime":"2026-01-22T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.177907 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.177987 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.178017 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.178047 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.178070 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:21Z","lastTransitionTime":"2026-01-22T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.281462 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.281546 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.281567 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.281591 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.281608 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:21Z","lastTransitionTime":"2026-01-22T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.384282 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.384360 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.384380 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.384403 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.384422 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:21Z","lastTransitionTime":"2026-01-22T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.486789 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.486822 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.486830 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.486842 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.486851 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:21Z","lastTransitionTime":"2026-01-22T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.589453 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.589598 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.589619 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.589644 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.589661 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:21Z","lastTransitionTime":"2026-01-22T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.693458 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.693557 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.693574 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.693598 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.693621 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:21Z","lastTransitionTime":"2026-01-22T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.754599 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.754639 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:38:21 crc kubenswrapper[4975]: E0122 10:38:21.754809 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.755077 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd"
Jan 22 10:38:21 crc kubenswrapper[4975]: E0122 10:38:21.755206 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:38:21 crc kubenswrapper[4975]: E0122 10:38:21.755625 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.796446 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.796533 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.796550 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.796576 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.796594 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:21Z","lastTransitionTime":"2026-01-22T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.907913 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.907977 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.907994 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.908020 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.908058 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:21Z","lastTransitionTime":"2026-01-22T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:21 crc kubenswrapper[4975]: I0122 10:38:21.933898 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:18:05.650555001 +0000 UTC
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.011314 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.011454 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.011486 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.011517 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.011540 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
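The certificate_manager.go:356 entries above print the same expiration but a different rotation deadline on every pass. A sketch of the likely mechanism, modeled on client-go's certificate manager, which (to my knowledge) picks a random point in roughly the 70-90% span of the certificate's validity; the exact fractions and the one-year validity below are assumptions:

// rotation.go - jittered rotation deadline sketch (assumed fractions, not kubelet source).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns a random instant between 70% and 90% of the
// certificate's lifetime, so each recomputation yields a different deadline
// for the same expiration date, as the log shows.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	expiry, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	notBefore := expiry.AddDate(-1, 0, 0) // assumed one-year validity for the sketch
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, expiry))
	}
}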
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.114906 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.114982 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.115000 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.115025 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.115043 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.222071 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.222146 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.222156 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.222182 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.222196 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.324777 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.324824 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.324836 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.324853 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.324865 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.427666 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.427729 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.427748 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.427771 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.427788 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.530835 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.530901 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.530918 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.530943 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.530962 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.635188 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.635265 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.635281 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.635314 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.635358 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.738994 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.739067 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.739084 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.739113 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.739132 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.754700 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:38:22 crc kubenswrapper[4975]: E0122 10:38:22.754869 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.842984 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.843040 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.843058 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.843084 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.843102 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.934586 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:14:33.204176683 +0000 UTC
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.946008 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.946087 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.946106 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.946131 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:22 crc kubenswrapper[4975]: I0122 10:38:22.946150 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:22Z","lastTransitionTime":"2026-01-22T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.049395 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.049457 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.049475 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.049498 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.049514 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.152522 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.152570 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.152586 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.152603 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.152618 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.254587 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.254685 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.254704 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.254728 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.254746 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.356724 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.356788 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.356798 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.356813 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.356828 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.459305 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.459366 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.459378 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.459396 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.459407 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.562436 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.562553 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.562575 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.562604 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.562624 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.665507 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.665582 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.665599 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.665623 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.665640 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.753946 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.754028 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:38:23 crc kubenswrapper[4975]: E0122 10:38:23.754127 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:38:23 crc kubenswrapper[4975]: E0122 10:38:23.754245 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.754455 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd"
Jan 22 10:38:23 crc kubenswrapper[4975]: E0122 10:38:23.754600 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.768677 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.768728 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.768745 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.768769 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.768789 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
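Every failing pod sync above traces to the same root cause named in the error: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal diagnostic sketch (not kubelet code; the accepted extensions are the conventional ones CNI loaders look for) that checks that directory from the node:

// cnicheck.go - list the CNI config directory named in the error and report
// candidate config files, roughly mirroring what the runtime's CNI loader does.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read", confDir, "->", err)
		return
	}
	found := 0
	for _, e := range entries {
		ext := strings.ToLower(filepath.Ext(e.Name()))
		// CNI loaders conventionally accept .conf, .conflist and .json files.
		if ext == ".conf" || ext == ".conflist" || ext == ".json" {
			fmt.Println("CNI config:", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// Matches the log: NetworkReady stays false until the network
		// provider writes its configuration into this directory.
		fmt.Println("no CNI configuration files found; network provider not started?")
	}
}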
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.872001 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.872131 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.872153 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.872176 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.872195 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.935402 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:03:39.470415347 +0000 UTC
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.976164 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.976224 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.976240 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.976265 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:23 crc kubenswrapper[4975]: I0122 10:38:23.976283 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:23Z","lastTransitionTime":"2026-01-22T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.079903 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.079970 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.079993 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.080023 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.080047 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:24Z","lastTransitionTime":"2026-01-22T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.182602 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.182665 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.182682 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.182710 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.182727 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:24Z","lastTransitionTime":"2026-01-22T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.285302 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.285409 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.285433 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.285461 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.285484 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:24Z","lastTransitionTime":"2026-01-22T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.388160 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.388230 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.388251 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.388275 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.388294 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:24Z","lastTransitionTime":"2026-01-22T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.491540 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.491630 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.491668 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.491699 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.491725 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:24Z","lastTransitionTime":"2026-01-22T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.594863 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.595003 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.595086 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.595120 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.595140 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:24Z","lastTransitionTime":"2026-01-22T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.697528 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.697593 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.697612 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.697637 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.697655 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:24Z","lastTransitionTime":"2026-01-22T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.754199 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:38:24 crc kubenswrapper[4975]: E0122 10:38:24.754439 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.800912 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.800979 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.800998 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.801019 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.801038 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:24Z","lastTransitionTime":"2026-01-22T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.904776 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.904836 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.904853 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.904879 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.904899 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:24Z","lastTransitionTime":"2026-01-22T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:24 crc kubenswrapper[4975]: I0122 10:38:24.936321 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:59:04.613172396 +0000 UTC
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.007032 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.007098 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.007114 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.007141 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.007159 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.110534 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.110594 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.110604 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.110624 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.110636 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.214230 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.214291 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.214313 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.214367 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.214429 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.317611 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.317706 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.317725 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.317750 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.317768 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.420390 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.420462 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.420484 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.420515 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.420539 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.524038 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.524128 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.524151 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.524186 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.524211 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.627744 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.627813 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.627835 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.627863 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.627884 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.730681 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.730727 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.730739 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.730755 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.730769 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.754512 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.754637 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:38:25 crc kubenswrapper[4975]: E0122 10:38:25.754709 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:38:25 crc kubenswrapper[4975]: E0122 10:38:25.754853 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.754958 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd"
Jan 22 10:38:25 crc kubenswrapper[4975]: E0122 10:38:25.755039 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.833636 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.833708 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.833728 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.833754 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.833771 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.936781 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:30:29.595124992 +0000 UTC
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.937088 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.937143 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.937166 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.937201 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:25 crc kubenswrapper[4975]: I0122 10:38:25.937219 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:25Z","lastTransitionTime":"2026-01-22T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.040497 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.040593 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.040644 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.040669 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.040686 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.143362 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.143420 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.143443 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.143469 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.143490 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.246379 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.246475 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.246494 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.246518 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.246535 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.349578 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.349657 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.349680 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.349708 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.349730 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.452865 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.452925 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.452944 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.452971 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.452992 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.556615 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.556681 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.556698 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.556723 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.556739 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.660249 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.660384 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.660410 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.660442 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.660467 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.754749 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:26 crc kubenswrapper[4975]: E0122 10:38:26.754943 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.763672 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.763726 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.763746 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.763769 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.763788 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.867231 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.867351 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.867370 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.867397 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.867415 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.937235 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:27:29.602450129 +0000 UTC Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.970149 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.970225 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.970248 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.970282 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:26 crc kubenswrapper[4975]: I0122 10:38:26.970301 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:26Z","lastTransitionTime":"2026-01-22T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.073925 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.073989 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.074007 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.074034 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.074054 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:27Z","lastTransitionTime":"2026-01-22T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.177424 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.177494 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.177513 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.177539 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.177557 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:27Z","lastTransitionTime":"2026-01-22T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.280813 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.280880 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.280897 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.280926 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.280944 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:27Z","lastTransitionTime":"2026-01-22T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.383695 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.383755 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.383773 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.383796 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.383816 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:27Z","lastTransitionTime":"2026-01-22T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.486710 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.486776 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.486793 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.486820 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.486841 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:27Z","lastTransitionTime":"2026-01-22T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.589917 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.589987 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.590006 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.590030 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.590052 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:27Z","lastTransitionTime":"2026-01-22T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.693327 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.693432 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.693451 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.693479 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.693499 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:27Z","lastTransitionTime":"2026-01-22T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.755004 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:27 crc kubenswrapper[4975]: E0122 10:38:27.755129 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.755004 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.755010 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:27 crc kubenswrapper[4975]: E0122 10:38:27.755288 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:27 crc kubenswrapper[4975]: E0122 10:38:27.755380 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.774270 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef176a2-eda6-4bfe-a0df-a2fdcd1c29d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daaac3f99e6dbe2c2e0a6704100e2a699fd2f0e544bd85f6afc019dc9da681f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b19596dc8de828280e2a7eab629b10d8fabe9df354df7c586727015c857a38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b2078bf6af7056bc3f98749c679ae05ef67871ef11413728720a0d2fb8dd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.795763 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.796804 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.796869 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.796891 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.796922 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.796945 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:27Z","lastTransitionTime":"2026-01-22T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.808718 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cqvxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2f714de-8040-41dc-b7d1-735a879df670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4de937c78815ff2202fc380bc8828300678f0eb1a5edb7f8c721f6881fbbcde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jh8dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cqvxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.832243 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:38:14Z\\\",\\\"message\\\":\\\"al\\\\nI0122 10:38:13.713291 7080 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 10:38:13.710906 7080 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0122 10:38:13.713434 7080 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:38:13.713489 7080 factory.go:656] Stopping watch factory\\\\nI0122 10:38:13.713541 7080 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:38:13.713180 7080 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 10:38:13.713629 7080 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 10:38:13.713661 7080 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:38:13.713810 7080 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 10:38:13.713858 7080 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 10:38:13.713878 7080 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 10:38:13.713886 7080 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 10:38:13.713909 7080 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 10:38:13.714175 7080 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:38:13.714258 7080 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 10:38:13.714398 7080 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:38:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mwgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t5gcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.856810 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f7b040-017f-4aa7-ae02-cda6b7a89026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2528e197dd5e79f805ff51c56a7054aef5bd5e72616faaa8ee499bc6575a73c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5941e9619f6e1f94864a7dccfebbf897b46fd1f9975f57988a2495aa089ffe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02a4951441f5faec82897b24dd2fda8a90a27765ff994fe27164daf6f5e52796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47be8736290206a6bb2355c9e39289eb19f0890c0339dbafdc97b75eb273aaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T10:37:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03c0a5f84cd52aee888e48748ef7d38ea61d82f3fcc93206b9ff6dd7927811c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4cf8d0185636a88545749d88fb4151590fb32e4f7744b2427bf99a2307a19dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42712923ce6bf96c80f167c2912dbd5e8f54beba972a864f229a6bd20e9d176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pscbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7c9rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.879769 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01133b76-3a28-4517-98fd-2526adefd865\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:37:00.237040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:37:00.238672 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2036507430/tls.crt::/tmp/serving-cert-2036507430/tls.key\\\\\\\"\\\\nI0122 10:37:05.841824 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:37:05.844857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:37:05.844885 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:37:05.844931 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:37:05.844941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:37:05.852733 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:37:05.852774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:37:05.852795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:37:05.852804 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:37:05.852811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:37:05.852817 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:37:05.853104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:37:05.854850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.898865 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d330fa2b1d83bad07e31d91ea4fe94131e4624e2d47c0eb80cb3cde7d722df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc2076bb1c864af5c29e9fe0ed34e6e9b1ec694ddaaf7c0be5c0f42eb5e351fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.902433 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.902477 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.902492 4975 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.902514 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.902532 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:27Z","lastTransitionTime":"2026-01-22T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.919202 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmklh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fcb4bcb-0a79-4420-bb51-e4585d3306a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:37:58Z\\\",\\\"message\\\":\\\"2026-01-22T10:37:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918\\\\n2026-01-22T10:37:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_defaca94-9467-4938-b14d-8fd087e9e918 to /host/opt/cni/bin/\\\\n2026-01-22T10:37:13Z [verbose] multus-daemon started\\\\n2026-01-22T10:37:13Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:37:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:37:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhnd8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmklh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.936315 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d891ff8-56eb-4f71-bdb4-638cf165511f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da039826ae5d51411fd0094b8e883f8ffa49a3fefae14a300bf24016b118c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://813be3a74ef47b78858ef5f0ce2bdd983360af919760af7818932c9852e759ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzl8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdg4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 
10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.938445 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:53:02.424152657 +0000 UTC Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.953291 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be08c7-0f3c-47b2-b15c-94dea396c9e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5jbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 
10:38:27.969495 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e6b770-c919-4ebe-85e2-b2b48ba4e331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8481aaa8a6cc077411844e7badb5111e294a17d12dc5eaaf9cd361057fa36f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2185a045ba4b8ef3a20b8f074c17c8d60f849f931f7ca5729df54eddbf956330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42d405df9d717ce8e15cb3bb2b2a4a9a3ce770013b28d0f8d4c5aff61034829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b89fe918e8c1962ff34fa46ee1db41c2e3cf4b6a7988609c9a7dd4814d86b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:27 crc kubenswrapper[4975]: I0122 10:38:27.982474 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf009362-788d-4aa1-b72a-65d7963e0db5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643c739c7c349269ce86d3272b9aa566b7e8664c176a044d4ebfb08a20ed0a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f764b8c1f0d4b58b85dc97698588f59eb8009fa4f23bf33311c364f1a3e0363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:27Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.004981 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.005060 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.005083 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.005115 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.005139 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.015772 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4633217-f34c-4863-9643-60220b8b1998\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc931424abaec3bcca41c426c4e335918361f159590c8e7765ffc4d056d47369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82f3671475a35e0187910af7fcc9cc7d266da37874cee5d4e44f067b6729d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57a673144f61590f215755422521a465d503b0ec62457b20bbb8976eca10787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a6ba9a70d62980e3b103fdc91fa8dac43f84fa53f101903616162a4688d3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a8ccec71c5ea185acc85002dc25223ab42b35d74768f7339d882782e9cdfc9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c01213cf75afa63477eceed902c7ebf54f7c5227d24e41f7464b630d5391b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8946beed60d9c65de649c0403691c1370d72bf139c6856dc8c4ff46f2e11b1c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://379cbfac7f9f82cf546c3c60a1aa528e57acc1183366fc96b3a45e6f5262ff11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:36:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.034407 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.050926 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3db4f0fd7a9972866153d2e5c6f008d8464e18b9b5c4889b5ada2f28394c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.071022 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30b8724160c2fb4a0f30f166fc733662a7648000e4f1115e4fe6d8caae6040e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.090370 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.104614 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c717ed91-9c30-47bc-b814-ceb1b25939a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c75fd3ba75b7c8ce1cf7d04cdb62f8b8f4d2cd7c25e6e722a7bbef789109ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fmw72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.108737 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.108809 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.108832 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.108863 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.108889 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.122641 4975 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zdgbb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f152e5a-dd64-45fc-9a8f-31652989056a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff9b8c913feb61e4ecd77259b64368a8b82587170501936b925d4eab55fed9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lclx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:37:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zdgbb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:28Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.211645 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.211705 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.211723 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.211746 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.211767 4975 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.314685 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.314759 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.314782 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.314810 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.314833 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.417794 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.417865 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.417884 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.417917 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.417936 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.521307 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.521412 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.521435 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.521466 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.521491 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.624473 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.624527 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.624540 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.624560 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.624574 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.728302 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.728463 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.728495 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.728533 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.728560 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.753930 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:28 crc kubenswrapper[4975]: E0122 10:38:28.754779 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.755367 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:38:28 crc kubenswrapper[4975]: E0122 10:38:28.755686 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.831880 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.831938 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.831955 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.831980 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.831997 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.934552 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.934628 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.934651 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.934677 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.934698 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:28Z","lastTransitionTime":"2026-01-22T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:28 crc kubenswrapper[4975]: I0122 10:38:28.938992 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:49:55.352231661 +0000 UTC Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.037684 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.037731 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.037746 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.037769 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.037785 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.140810 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.140872 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.140910 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.140944 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.140966 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.221643 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:29 crc kubenswrapper[4975]: E0122 10:38:29.221973 4975 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:38:29 crc kubenswrapper[4975]: E0122 10:38:29.222089 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs podName:50be08c7-0f3c-47b2-b15c-94dea396c9e1 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:33.222058363 +0000 UTC m=+165.880094513 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs") pod "network-metrics-daemon-v5jbd" (UID: "50be08c7-0f3c-47b2-b15c-94dea396c9e1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.243409 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.243465 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.243481 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.243507 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.243524 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.346243 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.346294 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.346312 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.346365 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.346384 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.449380 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.449420 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.449429 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.449448 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.449458 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.552417 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.552480 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.552493 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.552515 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.552527 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.656227 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.656307 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.656364 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.656397 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.656433 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.754453 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:29 crc kubenswrapper[4975]: E0122 10:38:29.754664 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.754682 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:29 crc kubenswrapper[4975]: E0122 10:38:29.754822 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.754924 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:29 crc kubenswrapper[4975]: E0122 10:38:29.755072 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.759561 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.759677 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.759701 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.759731 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.759869 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.863049 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.863446 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.863599 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.863756 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.863872 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.939365 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:33:58.450095781 +0000 UTC Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.966414 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.966516 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.966542 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.966572 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:29 crc kubenswrapper[4975]: I0122 10:38:29.966590 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:29Z","lastTransitionTime":"2026-01-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.069699 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.070023 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.070161 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.070370 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.070534 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.173636 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.173684 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.173696 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.173713 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.173727 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.277485 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.277841 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.278210 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.278417 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.278687 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.382043 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.382111 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.382129 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.382155 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.382173 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.485182 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.485227 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.485234 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.485248 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.485257 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.589264 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.589366 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.589381 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.589403 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.589419 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.693734 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.693818 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.693840 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.693872 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.693896 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.754752 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:30 crc kubenswrapper[4975]: E0122 10:38:30.754947 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.797611 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.797690 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.797708 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.797735 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.797761 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.898567 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.898748 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.898776 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.898808 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.898832 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.940355 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:26:23.616389301 +0000 UTC Jan 22 10:38:30 crc kubenswrapper[4975]: E0122 10:38:30.950920 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:30Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.967459 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.967725 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.967757 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.967930 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:30 crc kubenswrapper[4975]: I0122 10:38:30.968017 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:30Z","lastTransitionTime":"2026-01-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:30 crc kubenswrapper[4975]: E0122 10:38:30.999365 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:30Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.004389 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.004437 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.004448 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.004465 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.004477 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: E0122 10:38:31.022113 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.027944 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.028091 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.028167 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.028248 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.028386 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: E0122 10:38:31.046449 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.051833 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.051878 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.051890 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.051909 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.051921 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: E0122 10:38:31.071092 4975 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:38:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b2a3c802-a546-4128-b88a-6c9d41dab7dc\\\",\\\"systemUUID\\\":\\\"355792bf-2b64-4c58-a9ae-ab0c7ede715b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:38:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:38:31 crc kubenswrapper[4975]: E0122 10:38:31.071290 4975 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.073723 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.073760 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.073780 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.073797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.073807 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.176468 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.176521 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.176533 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.176555 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.176571 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.280002 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.280075 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.280093 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.280124 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.280142 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.383593 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.383672 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.383697 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.383727 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.383750 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.487091 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.487259 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.487286 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.487436 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.487565 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.591194 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.591289 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.591308 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.591399 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.591421 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.694973 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.695076 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.695094 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.695123 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.695147 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.754741 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.754910 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.755099 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:31 crc kubenswrapper[4975]: E0122 10:38:31.755303 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:31 crc kubenswrapper[4975]: E0122 10:38:31.755655 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:31 crc kubenswrapper[4975]: E0122 10:38:31.755768 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.798686 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.798756 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.798773 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.798798 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.798821 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.901864 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.901922 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.901938 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.901958 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.901970 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:31Z","lastTransitionTime":"2026-01-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:31 crc kubenswrapper[4975]: I0122 10:38:31.940843 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:04:24.906931916 +0000 UTC Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.004883 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.004956 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.004971 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.004994 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.005018 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.108773 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.108871 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.108892 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.108924 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.108947 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.211705 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.211777 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.211846 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.211916 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.211938 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.314781 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.314853 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.314878 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.314907 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.314926 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.417062 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.417111 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.417129 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.417152 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.417169 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.520733 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.520794 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.520811 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.520837 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.520856 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.623515 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.623563 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.623575 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.623592 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.623603 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.726114 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.726156 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.726165 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.726203 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.726214 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.754696 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:32 crc kubenswrapper[4975]: E0122 10:38:32.754869 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.829283 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.829352 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.829366 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.829383 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.829397 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.932195 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.932258 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.932275 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.932398 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.932417 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:32Z","lastTransitionTime":"2026-01-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:32 crc kubenswrapper[4975]: I0122 10:38:32.941991 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:04:12.420891703 +0000 UTC Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.035190 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.035247 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.035273 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.035296 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.035308 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.143157 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.143277 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.143307 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.143352 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.143367 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.247640 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.247677 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.247686 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.247702 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.247711 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.350893 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.350941 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.350958 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.350980 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.350998 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.453803 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.453897 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.453923 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.453947 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.453963 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.556790 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.556875 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.556899 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.556932 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.556954 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.660170 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.660218 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.660235 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.660259 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.660277 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.754607 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.754728 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.754639 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:33 crc kubenswrapper[4975]: E0122 10:38:33.754841 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:33 crc kubenswrapper[4975]: E0122 10:38:33.754943 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:33 crc kubenswrapper[4975]: E0122 10:38:33.755146 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.763200 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.763253 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.763269 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.763294 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.763313 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.866523 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.866596 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.866616 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.866644 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.866665 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.942783 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:42:12.535908744 +0000 UTC Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.969773 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.969845 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.969868 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.969899 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:33 crc kubenswrapper[4975]: I0122 10:38:33.969923 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:33Z","lastTransitionTime":"2026-01-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.073055 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.073128 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.073146 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.073171 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.073190 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:34Z","lastTransitionTime":"2026-01-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.176888 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.176963 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.176981 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.177012 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.177037 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:34Z","lastTransitionTime":"2026-01-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.280140 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.280231 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.280256 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.280295 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.280321 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:34Z","lastTransitionTime":"2026-01-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.383295 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.383391 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.383410 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.383440 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.383462 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:34Z","lastTransitionTime":"2026-01-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.487137 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.487247 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.487262 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.487290 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.487307 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:34Z","lastTransitionTime":"2026-01-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.590683 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.590732 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.590744 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.590762 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.590776 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:34Z","lastTransitionTime":"2026-01-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.693578 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.693615 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.693627 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.693648 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.693658 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:34Z","lastTransitionTime":"2026-01-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.754668 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:34 crc kubenswrapper[4975]: E0122 10:38:34.754866 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.796493 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.796547 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.796566 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.796589 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.796605 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:34Z","lastTransitionTime":"2026-01-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.899262 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.899316 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.899373 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.899401 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.899422 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:34Z","lastTransitionTime":"2026-01-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:34 crc kubenswrapper[4975]: I0122 10:38:34.942967 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:51:30.178017848 +0000 UTC Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.003184 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.003246 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.003263 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.003286 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.003303 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.106674 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.106751 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.106769 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.106797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.106815 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.209955 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.210024 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.210042 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.210130 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.210154 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.313730 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.313771 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.313787 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.313815 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.313832 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.416432 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.416523 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.416547 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.416580 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.416609 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.519932 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.519994 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.520009 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.520031 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.520045 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.622592 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.622652 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.622668 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.622692 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.622708 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.726565 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.726641 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.726660 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.726689 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.726718 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.754158 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:35 crc kubenswrapper[4975]: E0122 10:38:35.754410 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.754499 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:35 crc kubenswrapper[4975]: E0122 10:38:35.754602 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.755171 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:35 crc kubenswrapper[4975]: E0122 10:38:35.755530 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.829231 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.829275 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.829288 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.829308 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.829324 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.933013 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.933089 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.933110 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.933140 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.933162 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:35Z","lastTransitionTime":"2026-01-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:35 crc kubenswrapper[4975]: I0122 10:38:35.944152 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:56:30.825668871 +0000 UTC
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.036190 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.036670 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.036856 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.036999 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.037132 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.140841 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.140933 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.140950 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.140980 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.141003 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.244716 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.244805 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.244829 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.244858 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.244875 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.347501 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.347552 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.347568 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.347592 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.347609 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.450235 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.450298 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.450318 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.450383 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.450407 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.552971 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.553042 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.553058 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.553083 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.553100 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.655002 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.655045 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.655053 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.655067 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.655077 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.754321 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:38:36 crc kubenswrapper[4975]: E0122 10:38:36.754571 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.757883 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.757931 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.757942 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.757965 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.757977 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.860062 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.860207 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.860277 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.860309 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.860360 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.944993 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:57:02.816066356 +0000 UTC
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.963175 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.963213 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.963225 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.963244 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:36 crc kubenswrapper[4975]: I0122 10:38:36.963268 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:36Z","lastTransitionTime":"2026-01-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.065867 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.065945 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.065963 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.065985 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.066003 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:37Z","lastTransitionTime":"2026-01-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.169817 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.169889 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.169907 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.169932 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.169951 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:37Z","lastTransitionTime":"2026-01-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.272753 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.272832 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.272857 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.272887 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.272909 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:37Z","lastTransitionTime":"2026-01-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.376617 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.376737 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.376755 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.376785 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.376806 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:37Z","lastTransitionTime":"2026-01-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.479558 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.479621 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.479641 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.479664 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.479681 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:37Z","lastTransitionTime":"2026-01-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.582640 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.582710 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.582728 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.582754 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.582773 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:37Z","lastTransitionTime":"2026-01-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.685691 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.685780 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.685799 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.685853 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.685871 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:37Z","lastTransitionTime":"2026-01-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.754500 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.754500 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.754546 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd"
Jan 22 10:38:37 crc kubenswrapper[4975]: E0122 10:38:37.754826 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:38:37 crc kubenswrapper[4975]: E0122 10:38:37.754945 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:38:37 crc kubenswrapper[4975]: E0122 10:38:37.755125 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.788724 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.788797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.788814 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.788844 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.788864 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:37Z","lastTransitionTime":"2026-01-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.840127 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7c9rx" podStartSLOduration=87.84010093 podStartE2EDuration="1m27.84010093s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:37.839583696 +0000 UTC m=+110.497619876" watchObservedRunningTime="2026-01-22 10:38:37.84010093 +0000 UTC m=+110.498137070"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.902583 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.903015 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.903203 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.903431 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.903648 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:37Z","lastTransitionTime":"2026-01-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.913735 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.913709788 podStartE2EDuration="1m32.913709788s" podCreationTimestamp="2026-01-22 10:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:37.887913021 +0000 UTC m=+110.545949171" watchObservedRunningTime="2026-01-22 10:38:37.913709788 +0000 UTC m=+110.571745938"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.945163 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:32:00.988690597 +0000 UTC
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.946585 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-cqvxp" podStartSLOduration=87.946555493 podStartE2EDuration="1m27.946555493s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:37.930958409 +0000 UTC m=+110.588994559" watchObservedRunningTime="2026-01-22 10:38:37.946555493 +0000 UTC m=+110.604591643"
Jan 22 10:38:37 crc kubenswrapper[4975]: I0122 10:38:37.975280 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdg4k" podStartSLOduration=86.975241611 podStartE2EDuration="1m26.975241611s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:37.946839111 +0000 UTC m=+110.604875261" watchObservedRunningTime="2026-01-22 10:38:37.975241611 +0000 UTC m=+110.633277761"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.007044 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.007094 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.007109 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.007128 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.007141 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.026928 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.02689574 podStartE2EDuration="1m32.02689574s" podCreationTimestamp="2026-01-22 10:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:38.007739506 +0000 UTC m=+110.665775646" watchObservedRunningTime="2026-01-22 10:38:38.02689574 +0000 UTC m=+110.684931880"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.071109 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qmklh" podStartSLOduration=88.07107915 podStartE2EDuration="1m28.07107915s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:38.051416382 +0000 UTC m=+110.709452522" watchObservedRunningTime="2026-01-22 10:38:38.07107915 +0000 UTC m=+110.729115300"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.105812 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.105784705 podStartE2EDuration="58.105784705s" podCreationTimestamp="2026-01-22 10:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:38.103319057 +0000 UTC m=+110.761355187" watchObservedRunningTime="2026-01-22 10:38:38.105784705 +0000 UTC m=+110.763820825"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.109836 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.109895 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.109923 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.109956 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.109977 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.120181 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=43.120159995 podStartE2EDuration="43.120159995s" podCreationTimestamp="2026-01-22 10:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:38.118442798 +0000 UTC m=+110.776478978" watchObservedRunningTime="2026-01-22 10:38:38.120159995 +0000 UTC m=+110.778196145"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.166148 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=93.166120175 podStartE2EDuration="1m33.166120175s" podCreationTimestamp="2026-01-22 10:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:38.165642231 +0000 UTC m=+110.823678361" watchObservedRunningTime="2026-01-22 10:38:38.166120175 +0000 UTC m=+110.824156295"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.179983 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-zdgbb" podStartSLOduration=88.179960141 podStartE2EDuration="1m28.179960141s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:38.17994039 +0000 UTC m=+110.837976570" watchObservedRunningTime="2026-01-22 10:38:38.179960141 +0000 UTC m=+110.837996261"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.213215 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.213285 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.213303 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.213367 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.213393 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.241080 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podStartSLOduration=88.24105264 podStartE2EDuration="1m28.24105264s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:38.240910426 +0000 UTC m=+110.898946576" watchObservedRunningTime="2026-01-22 10:38:38.24105264 +0000 UTC m=+110.899088780"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.316100 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.316168 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.316187 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.316216 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.316237 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.418258 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.418367 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.418391 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.418422 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.418445 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.521867 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.522114 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.522125 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.522141 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.522152 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.624754 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.624812 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.624831 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.624858 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.624877 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.728457 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.728622 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.728643 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.728672 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.728690 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.754667 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:38:38 crc kubenswrapper[4975]: E0122 10:38:38.756159 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.832499 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.832567 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.832592 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.832623 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.832644 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.935669 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.935729 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.935746 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.935771 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.935795 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:38Z","lastTransitionTime":"2026-01-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:38 crc kubenswrapper[4975]: I0122 10:38:38.946111 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:48:11.790682977 +0000 UTC
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.039244 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.039311 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.039327 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.039389 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.039406 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.142581 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.142641 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.142658 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.142682 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.142701 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.246240 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.246303 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.246319 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.246377 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.246396 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.349068 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.349129 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.349146 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.349171 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.349191 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.452494 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.452557 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.452574 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.452600 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.452618 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.556069 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.556128 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.556148 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.556172 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.556192 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.660050 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.660115 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.660131 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.660157 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.660174 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.754181 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.754186 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:38:39 crc kubenswrapper[4975]: E0122 10:38:39.754371 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.754219 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd"
Jan 22 10:38:39 crc kubenswrapper[4975]: E0122 10:38:39.754634 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1"
Jan 22 10:38:39 crc kubenswrapper[4975]: E0122 10:38:39.754535 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.762726 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.762778 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.762797 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.762823 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.762894 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.865723 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.865785 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.865808 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.865836 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.865858 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.946898 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:05:03.285085264 +0000 UTC
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.968466 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.968529 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.968552 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.968582 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:39 crc kubenswrapper[4975]: I0122 10:38:39.968603 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:39Z","lastTransitionTime":"2026-01-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.072101 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.072169 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.072205 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.072235 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.072255 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:40Z","lastTransitionTime":"2026-01-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.175483 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.175540 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.175554 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.175576 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.175592 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:40Z","lastTransitionTime":"2026-01-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.278468 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.278542 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.278566 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.278594 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.278655 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:40Z","lastTransitionTime":"2026-01-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.381953 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.382025 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.382055 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.382087 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.382105 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:40Z","lastTransitionTime":"2026-01-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.485311 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.485421 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.485439 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.485464 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.485485 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:40Z","lastTransitionTime":"2026-01-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.588547 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.588611 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.588631 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.588656 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.588675 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:40Z","lastTransitionTime":"2026-01-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.692155 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.692218 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.692236 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.692258 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.692276 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:40Z","lastTransitionTime":"2026-01-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.754567 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:40 crc kubenswrapper[4975]: E0122 10:38:40.754772 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.795245 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.795326 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.795392 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.795423 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.795451 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:40Z","lastTransitionTime":"2026-01-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.898519 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.898581 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.898604 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.898630 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.898651 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:40Z","lastTransitionTime":"2026-01-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:40 crc kubenswrapper[4975]: I0122 10:38:40.948014 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:41:53.972202995 +0000 UTC Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.002325 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.002414 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.002432 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.002466 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.002485 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:41Z","lastTransitionTime":"2026-01-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.105142 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.105196 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.105210 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.105231 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.105244 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:41Z","lastTransitionTime":"2026-01-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.158515 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.158833 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.159041 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.159202 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.159398 4975 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:38:41Z","lastTransitionTime":"2026-01-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.217862 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc"] Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.218573 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.221863 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.222674 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.222674 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.223586 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.383427 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108c261b-ff05-4c03-aa80-5f8cb56daf16-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.383647 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/108c261b-ff05-4c03-aa80-5f8cb56daf16-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.383718 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/108c261b-ff05-4c03-aa80-5f8cb56daf16-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.383801 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/108c261b-ff05-4c03-aa80-5f8cb56daf16-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.383884 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108c261b-ff05-4c03-aa80-5f8cb56daf16-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.485058 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/108c261b-ff05-4c03-aa80-5f8cb56daf16-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.485106 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/108c261b-ff05-4c03-aa80-5f8cb56daf16-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.485124 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/108c261b-ff05-4c03-aa80-5f8cb56daf16-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.485142 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108c261b-ff05-4c03-aa80-5f8cb56daf16-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.485183 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108c261b-ff05-4c03-aa80-5f8cb56daf16-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.485267 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/108c261b-ff05-4c03-aa80-5f8cb56daf16-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.485288 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/108c261b-ff05-4c03-aa80-5f8cb56daf16-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.486264 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/108c261b-ff05-4c03-aa80-5f8cb56daf16-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.503499 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108c261b-ff05-4c03-aa80-5f8cb56daf16-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.506689 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108c261b-ff05-4c03-aa80-5f8cb56daf16-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6rtpc\" (UID: \"108c261b-ff05-4c03-aa80-5f8cb56daf16\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.541644 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.754153 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.754220 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.754242 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:41 crc kubenswrapper[4975]: E0122 10:38:41.754854 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:41 crc kubenswrapper[4975]: E0122 10:38:41.754969 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:41 crc kubenswrapper[4975]: E0122 10:38:41.755095 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.948152 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:13:53.346039676 +0000 UTC Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.948254 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 22 10:38:41 crc kubenswrapper[4975]: I0122 10:38:41.959311 4975 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 10:38:42 crc kubenswrapper[4975]: I0122 10:38:42.429991 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" event={"ID":"108c261b-ff05-4c03-aa80-5f8cb56daf16","Type":"ContainerStarted","Data":"193c450d2947bb5c4a5bf0d51ccfb68110fe71104d03b2f20ef2a2690142c590"} Jan 22 10:38:42 crc kubenswrapper[4975]: I0122 10:38:42.430065 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" event={"ID":"108c261b-ff05-4c03-aa80-5f8cb56daf16","Type":"ContainerStarted","Data":"311b9bc30dda5170b55a22ca0f15068e354cc23be7be77981ce04956f67d8fde"} Jan 22 10:38:42 crc kubenswrapper[4975]: I0122 10:38:42.754466 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:42 crc kubenswrapper[4975]: E0122 10:38:42.754569 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:43 crc kubenswrapper[4975]: I0122 10:38:43.753848 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:43 crc kubenswrapper[4975]: I0122 10:38:43.753860 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:43 crc kubenswrapper[4975]: E0122 10:38:43.754118 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:43 crc kubenswrapper[4975]: I0122 10:38:43.754172 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:43 crc kubenswrapper[4975]: E0122 10:38:43.754665 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:43 crc kubenswrapper[4975]: E0122 10:38:43.754864 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:43 crc kubenswrapper[4975]: I0122 10:38:43.755303 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:38:43 crc kubenswrapper[4975]: E0122 10:38:43.755591 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t5gcx_openshift-ovn-kubernetes(c1de894a-1cf5-47fa-9fc5-0a52783fe152)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" Jan 22 10:38:44 crc kubenswrapper[4975]: I0122 10:38:44.754531 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:44 crc kubenswrapper[4975]: E0122 10:38:44.754705 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.443527 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/1.log" Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.444416 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/0.log" Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.444521 4975 generic.go:334] "Generic (PLEG): container finished" podID="9fcb4bcb-0a79-4420-bb51-e4585d3306a9" containerID="fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608" exitCode=1 Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.444581 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmklh" event={"ID":"9fcb4bcb-0a79-4420-bb51-e4585d3306a9","Type":"ContainerDied","Data":"fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608"} Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.444648 4975 scope.go:117] "RemoveContainer" containerID="6d16fcfef82be39151bdac26072177a99f52cb7363d5646afa5595f93bf64c0d" Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.445290 4975 scope.go:117] "RemoveContainer" containerID="fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608" Jan 22 10:38:45 crc kubenswrapper[4975]: E0122 10:38:45.445668 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qmklh_openshift-multus(9fcb4bcb-0a79-4420-bb51-e4585d3306a9)\"" pod="openshift-multus/multus-qmklh" podUID="9fcb4bcb-0a79-4420-bb51-e4585d3306a9" Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.470141 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6rtpc" podStartSLOduration=95.470108785 podStartE2EDuration="1m35.470108785s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:42.446970115 +0000 UTC m=+115.105006225" watchObservedRunningTime="2026-01-22 10:38:45.470108785 +0000 UTC m=+118.128144935" Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.758628 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:45 crc kubenswrapper[4975]: E0122 10:38:45.758782 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.759036 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:45 crc kubenswrapper[4975]: E0122 10:38:45.759116 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:45 crc kubenswrapper[4975]: I0122 10:38:45.759311 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:45 crc kubenswrapper[4975]: E0122 10:38:45.759556 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:46 crc kubenswrapper[4975]: I0122 10:38:46.450672 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/1.log" Jan 22 10:38:46 crc kubenswrapper[4975]: I0122 10:38:46.754638 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:46 crc kubenswrapper[4975]: E0122 10:38:46.754835 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:47 crc kubenswrapper[4975]: I0122 10:38:47.754395 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:47 crc kubenswrapper[4975]: I0122 10:38:47.754520 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:47 crc kubenswrapper[4975]: E0122 10:38:47.756469 4975 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 22 10:38:47 crc kubenswrapper[4975]: E0122 10:38:47.756512 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:47 crc kubenswrapper[4975]: E0122 10:38:47.756713 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:47 crc kubenswrapper[4975]: I0122 10:38:47.756547 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:47 crc kubenswrapper[4975]: E0122 10:38:47.756889 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:47 crc kubenswrapper[4975]: E0122 10:38:47.861525 4975 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 10:38:48 crc kubenswrapper[4975]: I0122 10:38:48.754533 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:48 crc kubenswrapper[4975]: E0122 10:38:48.754899 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:49 crc kubenswrapper[4975]: I0122 10:38:49.753894 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:49 crc kubenswrapper[4975]: I0122 10:38:49.753962 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:49 crc kubenswrapper[4975]: I0122 10:38:49.753900 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:49 crc kubenswrapper[4975]: E0122 10:38:49.754098 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:49 crc kubenswrapper[4975]: E0122 10:38:49.754234 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:49 crc kubenswrapper[4975]: E0122 10:38:49.754453 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:50 crc kubenswrapper[4975]: I0122 10:38:50.754688 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:50 crc kubenswrapper[4975]: E0122 10:38:50.754946 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:51 crc kubenswrapper[4975]: I0122 10:38:51.754165 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:51 crc kubenswrapper[4975]: I0122 10:38:51.754266 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:51 crc kubenswrapper[4975]: I0122 10:38:51.754211 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:51 crc kubenswrapper[4975]: E0122 10:38:51.754404 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:51 crc kubenswrapper[4975]: E0122 10:38:51.754537 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:51 crc kubenswrapper[4975]: E0122 10:38:51.754680 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:52 crc kubenswrapper[4975]: I0122 10:38:52.754833 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:52 crc kubenswrapper[4975]: E0122 10:38:52.755052 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:52 crc kubenswrapper[4975]: E0122 10:38:52.863228 4975 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 10:38:53 crc kubenswrapper[4975]: I0122 10:38:53.754192 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:53 crc kubenswrapper[4975]: I0122 10:38:53.754223 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:53 crc kubenswrapper[4975]: E0122 10:38:53.754400 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:53 crc kubenswrapper[4975]: I0122 10:38:53.754464 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:53 crc kubenswrapper[4975]: E0122 10:38:53.754666 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:53 crc kubenswrapper[4975]: E0122 10:38:53.754786 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:54 crc kubenswrapper[4975]: I0122 10:38:54.754134 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:54 crc kubenswrapper[4975]: E0122 10:38:54.754374 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:54 crc kubenswrapper[4975]: I0122 10:38:54.760815 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:38:55 crc kubenswrapper[4975]: I0122 10:38:55.481556 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/3.log" Jan 22 10:38:55 crc kubenswrapper[4975]: I0122 10:38:55.484096 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerStarted","Data":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} Jan 22 10:38:55 crc kubenswrapper[4975]: I0122 10:38:55.484486 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:38:55 crc kubenswrapper[4975]: I0122 10:38:55.507897 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podStartSLOduration=105.507879121 podStartE2EDuration="1m45.507879121s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:55.507316096 +0000 UTC m=+128.165352226" watchObservedRunningTime="2026-01-22 10:38:55.507879121 +0000 UTC m=+128.165915241" Jan 22 10:38:55 crc kubenswrapper[4975]: I0122 10:38:55.753980 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:55 crc kubenswrapper[4975]: I0122 10:38:55.754072 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:55 crc kubenswrapper[4975]: I0122 10:38:55.753980 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:55 crc kubenswrapper[4975]: E0122 10:38:55.754225 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:55 crc kubenswrapper[4975]: E0122 10:38:55.754428 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:55 crc kubenswrapper[4975]: E0122 10:38:55.754627 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:55 crc kubenswrapper[4975]: I0122 10:38:55.961129 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v5jbd"] Jan 22 10:38:56 crc kubenswrapper[4975]: I0122 10:38:56.488299 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:56 crc kubenswrapper[4975]: E0122 10:38:56.488553 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:56 crc kubenswrapper[4975]: I0122 10:38:56.754268 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:56 crc kubenswrapper[4975]: E0122 10:38:56.754477 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:57 crc kubenswrapper[4975]: I0122 10:38:57.754481 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:57 crc kubenswrapper[4975]: I0122 10:38:57.754545 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:57 crc kubenswrapper[4975]: E0122 10:38:57.756313 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:38:57 crc kubenswrapper[4975]: E0122 10:38:57.756466 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:57 crc kubenswrapper[4975]: E0122 10:38:57.864789 4975 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 10:38:58 crc kubenswrapper[4975]: I0122 10:38:58.754164 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:38:58 crc kubenswrapper[4975]: I0122 10:38:58.754178 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:38:58 crc kubenswrapper[4975]: E0122 10:38:58.755055 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:38:58 crc kubenswrapper[4975]: E0122 10:38:58.755289 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:38:58 crc kubenswrapper[4975]: I0122 10:38:58.756029 4975 scope.go:117] "RemoveContainer" containerID="fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608" Jan 22 10:38:59 crc kubenswrapper[4975]: I0122 10:38:59.502822 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/1.log" Jan 22 10:38:59 crc kubenswrapper[4975]: I0122 10:38:59.503162 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmklh" event={"ID":"9fcb4bcb-0a79-4420-bb51-e4585d3306a9","Type":"ContainerStarted","Data":"cb3fc758f4e3dafef1c65a6af58250e64c986ea64452a9ca79134a5387a92fb0"} Jan 22 10:38:59 crc kubenswrapper[4975]: I0122 10:38:59.754657 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:38:59 crc kubenswrapper[4975]: I0122 10:38:59.754680 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:38:59 crc kubenswrapper[4975]: E0122 10:38:59.754952 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:38:59 crc kubenswrapper[4975]: E0122 10:38:59.754944 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:39:00 crc kubenswrapper[4975]: I0122 10:39:00.754138 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:39:00 crc kubenswrapper[4975]: I0122 10:39:00.754183 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:39:00 crc kubenswrapper[4975]: E0122 10:39:00.754261 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:39:00 crc kubenswrapper[4975]: E0122 10:39:00.754304 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:39:01 crc kubenswrapper[4975]: I0122 10:39:01.754315 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:39:01 crc kubenswrapper[4975]: I0122 10:39:01.754323 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:39:01 crc kubenswrapper[4975]: E0122 10:39:01.754567 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:39:01 crc kubenswrapper[4975]: E0122 10:39:01.754798 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:39:02 crc kubenswrapper[4975]: I0122 10:39:02.753906 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:39:02 crc kubenswrapper[4975]: E0122 10:39:02.754095 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5jbd" podUID="50be08c7-0f3c-47b2-b15c-94dea396c9e1" Jan 22 10:39:02 crc kubenswrapper[4975]: I0122 10:39:02.754422 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:39:02 crc kubenswrapper[4975]: E0122 10:39:02.755502 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:39:03 crc kubenswrapper[4975]: I0122 10:39:03.753882 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:39:03 crc kubenswrapper[4975]: I0122 10:39:03.754017 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:39:03 crc kubenswrapper[4975]: I0122 10:39:03.756643 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 10:39:03 crc kubenswrapper[4975]: I0122 10:39:03.757076 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 10:39:04 crc kubenswrapper[4975]: I0122 10:39:04.754186 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:39:04 crc kubenswrapper[4975]: I0122 10:39:04.754394 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:39:04 crc kubenswrapper[4975]: I0122 10:39:04.757865 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 10:39:04 crc kubenswrapper[4975]: I0122 10:39:04.757885 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 10:39:04 crc kubenswrapper[4975]: I0122 10:39:04.758002 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 10:39:04 crc kubenswrapper[4975]: I0122 10:39:04.759137 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.650117 4975 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.708617 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-62jj5"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.709184 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.715211 4975 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.715274 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.716024 4975 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.716108 4975 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.716190 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.716216 4975 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.716275 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.716049 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.736253 4975 
reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.736537 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.737377 4975 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.737407 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.737489 4975 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.737507 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.737608 4975 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.737625 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.737645 4975 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User 
"system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.737662 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.737794 4975 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.737850 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: W0122 10:39:11.739898 4975 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 10:39:11 crc kubenswrapper[4975]: E0122 10:39:11.739934 4975 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.803417 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.804289 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck7qg"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.804847 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.805098 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.805595 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.806206 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.806374 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.806498 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.807122 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dfvnx"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.807312 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.807478 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.807449 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.808151 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ptf2x"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.808627 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ptf2x" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.809599 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.809992 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.813017 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.813496 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.813929 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.816389 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-42cdr"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.816935 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-prtg7"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.817132 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.817376 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h2jwh"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.817710 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.817981 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819049 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819293 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819433 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819449 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819546 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819634 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819662 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819782 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819831 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819898 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819925 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.820005 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.820030 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.820125 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.820224 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.820316 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.820435 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 
10:39:11.820533 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.820669 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819640 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.819792 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.821122 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.821376 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.821631 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.821800 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.821881 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.821994 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.822076 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.822121 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.822217 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.822417 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.821815 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.822725 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.823559 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.823910 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.823934 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-94jvp"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.824015 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.824097 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.824324 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.824578 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.824804 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.824913 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.824989 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b9kn8"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.825454 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.825868 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.826582 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.827074 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.827154 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.827557 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.827570 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.827752 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.827801 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.827944 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.827955 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.829988 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.830186 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.830354 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.833004 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.833540 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2jxxh"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.833917 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.834487 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.834609 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.834570 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.836078 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.843762 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849073 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849113 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-serving-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849137 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-image-import-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849159 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-node-pullsecrets\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849178 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-serving-cert\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849199 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849220 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849238 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit-dir\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849276 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-encryption-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849352 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmlk\" (UniqueName: \"kubernetes.io/projected/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-kube-api-access-xqmlk\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.849435 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-client\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.853190 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.853701 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.853921 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.854030 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.854149 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.853962 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.857272 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.857601 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.857759 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.857850 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.858043 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 10:39:11 crc 
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.858127 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.858805 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.858969 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.859792 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.857606 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.861956 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.865794 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.866077 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.865826 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.868001 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c7fpw"]
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.869094 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z6j7l"]
Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.870079 4975 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.871153 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c629z"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.868450 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.865839 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.865904 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.866265 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.866290 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.866554 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.866825 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.866918 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.867456 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.867504 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.867637 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.867694 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.867738 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.868025 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.871232 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.871688 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.872576 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.872902 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.874207 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.876854 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.875985 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.877392 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.877442 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.878156 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.878274 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.878844 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.879111 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.879181 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.881404 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.883069 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.883563 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l942m"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.884023 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.884209 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.884672 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.885056 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.885733 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-7pt74"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.886128 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.886210 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.886525 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.887108 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gdlsh"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.887633 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.890751 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.891537 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.891998 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.892488 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.892765 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.892946 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.894039 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.894734 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.907546 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.908145 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.909566 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.912446 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sk2bb"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.913204 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.928800 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.930852 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-62jj5"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.937373 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-n4xzq"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.945820 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck7qg"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.946280 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.946312 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.946443 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n4xzq" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.947323 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.948488 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dfvnx"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.949935 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.949975 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc0a96d-0893-4c54-a6df-52f9f757f72e-service-ca-bundle\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950008 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-metrics-certs\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950061 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/af2b3377-4d92-4b44-bee1-338c9156c095-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950129 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd3bde3-2279-48fc-9962-33f16d29dab2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lbqkh\" (UID: \"4fd3bde3-2279-48fc-9962-33f16d29dab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950161 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-client\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950205 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-client-ca\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950227 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrhtd\" (UniqueName: 
\"kubernetes.io/projected/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-kube-api-access-zrhtd\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950447 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xs4g\" (UniqueName: \"kubernetes.io/projected/b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16-kube-api-access-7xs4g\") pod \"machine-config-controller-84d6567774-5cbp9\" (UID: \"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950499 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950525 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw6ll\" (UniqueName: \"kubernetes.io/projected/48c3f95e-78eb-4e10-bd32-4c4644190f01-kube-api-access-gw6ll\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950607 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-serving-cert\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950679 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/881a346a-95ca-4ef3-a824-b10d11a6b049-proxy-tls\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950736 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f4e900b-eda0-47d5-8c08-aca5c5468915-auth-proxy-config\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950762 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29jpx\" (UniqueName: \"kubernetes.io/projected/81fdabfd-7410-4ca7-855b-c1764b58b808-kube-api-access-29jpx\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950810 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-etcd-client\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950837 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-image-import-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950899 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afc0a96d-0893-4c54-a6df-52f9f757f72e-serving-cert\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.950947 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7447dea-7435-4f66-94d2-19d28357acad-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zzh7w\" (UID: \"e7447dea-7435-4f66-94d2-19d28357acad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951102 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-stats-auth\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951129 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c11e90-4ee6-4534-ae9b-8e62ec35d08e-config\") pod \"kube-controller-manager-operator-78b949d7b-z5l8p\" (UID: \"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951160 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-trusted-ca-bundle\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951185 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81fdabfd-7410-4ca7-855b-c1764b58b808-secret-volume\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951202 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951206 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951230 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e7efaac-9ce4-4487-9d6b-cd97ab115921-apiservice-cert\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951250 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd3bde3-2279-48fc-9962-33f16d29dab2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lbqkh\" (UID: \"4fd3bde3-2279-48fc-9962-33f16d29dab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951271 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-config\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951292 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj4bh\" (UniqueName: \"kubernetes.io/projected/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-kube-api-access-rj4bh\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951312 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81fdabfd-7410-4ca7-855b-c1764b58b808-config-volume\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951355 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951375 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-config\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951392 4975 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-service-ca\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951446 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-serving-cert\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951543 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a85dd0b4-d50c-433c-a510-238a0f9a22bb-metrics-tls\") pod \"dns-operator-744455d44c-b9kn8\" (UID: \"a85dd0b4-d50c-433c-a510-238a0f9a22bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951568 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-policies\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951588 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951617 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-oauth-config\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951639 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/881a346a-95ca-4ef3-a824-b10d11a6b049-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.951693 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4e900b-eda0-47d5-8c08-aca5c5468915-config\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952445 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
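Each reconciler_common.go record above is one step of the kubelet's volume reconciler: a volume that appears in the desired state but not yet in the actual state is first checked as attached (VerifyControllerAttachedVolume, reconciler_common.go:245) and then mounted (MountVolume, reconciler_common.go:218). A rough sketch of that compare-and-act loop, with invented types standing in for the real desired/actual state caches:

package main

import "log"

// volume is a stand-in for the reconciler's per-volume bookkeeping, keyed by
// the UniqueName printed above, e.g. "kubernetes.io/secret/<pod-uid>-serving-cert".
type volume struct {
	uniqueName string
	attached   bool
	mounted    bool
}

// reconcile starts whichever operation each desired volume still needs,
// mirroring the two kinds of records in the log.
func reconcile(desired []volume, actual map[string]*volume) {
	for _, want := range desired {
		got := actual[want.uniqueName]
		switch {
		case got == nil || !got.attached:
			log.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q", want.uniqueName)
		case !got.mounted:
			log.Printf("operationExecutor.MountVolume started for volume %q", want.uniqueName)
		}
	}
}

func main() {
	reconcile(
		[]volume{{uniqueName: "kubernetes.io/configmap/example-config"}},
		map[string]*volume{},
	)
}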
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952461 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-client-ca\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952492 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952510 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-oauth-serving-cert\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952529 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af2b3377-4d92-4b44-bee1-338c9156c095-config\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952560 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3c451d4b-5258-4a31-a438-d93a650f354f-etcd-ca\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952576 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c451d4b-5258-4a31-a438-d93a650f354f-etcd-service-ca\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952598 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-metrics-tls\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952615 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952639 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c451d4b-5258-4a31-a438-d93a650f354f-serving-cert\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952666 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16-proxy-tls\") pod \"machine-config-controller-84d6567774-5cbp9\" (UID: \"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952689 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c451d4b-5258-4a31-a438-d93a650f354f-config\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952714 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c11e90-4ee6-4534-ae9b-8e62ec35d08e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z5l8p\" (UID: \"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952732 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/881a346a-95ca-4ef3-a824-b10d11a6b049-images\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952747 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c629z\" (UID: \"cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952770 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952841 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-default-certificate\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " 
pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952880 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-encryption-config\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952911 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-encryption-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952938 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.952966 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953016 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rqqb\" (UniqueName: \"kubernetes.io/projected/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-kube-api-access-5rqqb\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953052 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7n7\" (UniqueName: \"kubernetes.io/projected/5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f-kube-api-access-pm7n7\") pod \"control-plane-machine-set-operator-78cbb6b69f-d6l9z\" (UID: \"5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953084 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzwdw\" (UniqueName: \"kubernetes.io/projected/afc0a96d-0893-4c54-a6df-52f9f757f72e-kube-api-access-nzwdw\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953107 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgfg\" (UniqueName: \"kubernetes.io/projected/5f4e900b-eda0-47d5-8c08-aca5c5468915-kube-api-access-rqgfg\") pod \"machine-approver-56656f9798-6p98b\" (UID: 
\"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953132 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/befe9235-f8a1-4fda-ace8-2752310375e3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-42cdr\" (UID: \"befe9235-f8a1-4fda-ace8-2752310375e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953154 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcrr\" (UniqueName: \"kubernetes.io/projected/befe9235-f8a1-4fda-ace8-2752310375e3-kube-api-access-jfcrr\") pod \"openshift-config-operator-7777fb866f-42cdr\" (UID: \"befe9235-f8a1-4fda-ace8-2752310375e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953190 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88f51483-70e8-478e-8082-77a1372f2e8f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxzg4\" (UID: \"88f51483-70e8-478e-8082-77a1372f2e8f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953209 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953250 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fd3bde3-2279-48fc-9962-33f16d29dab2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lbqkh\" (UID: \"4fd3bde3-2279-48fc-9962-33f16d29dab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953276 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8v4\" (UniqueName: \"kubernetes.io/projected/88f51483-70e8-478e-8082-77a1372f2e8f-kube-api-access-qx8v4\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxzg4\" (UID: \"88f51483-70e8-478e-8082-77a1372f2e8f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953303 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f51483-70e8-478e-8082-77a1372f2e8f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxzg4\" (UID: \"88f51483-70e8-478e-8082-77a1372f2e8f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953356 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9tlv\" (UniqueName: \"kubernetes.io/projected/9261771a-1df2-4fa4-a5e1-270602b5e2ad-kube-api-access-x9tlv\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: 
\"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953390 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgcmb\" (UniqueName: \"kubernetes.io/projected/128e6487-d288-47e9-92db-491404d8ec40-kube-api-access-fgcmb\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953417 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953441 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm66p\" (UniqueName: \"kubernetes.io/projected/4e7efaac-9ce4-4487-9d6b-cd97ab115921-kube-api-access-jm66p\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953463 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953483 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69x9\" (UniqueName: \"kubernetes.io/projected/af2b3377-4d92-4b44-bee1-338c9156c095-kube-api-access-j69x9\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953503 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/128e6487-d288-47e9-92db-491404d8ec40-serving-cert\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953533 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5f4e900b-eda0-47d5-8c08-aca5c5468915-machine-approver-tls\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953565 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45531d0c-698f-4013-9b53-ccfb8f76268f-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-zc7v2\" (UID: \"45531d0c-698f-4013-9b53-ccfb8f76268f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953591 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-serving-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953628 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9261771a-1df2-4fa4-a5e1-270602b5e2ad-config\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953654 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e7efaac-9ce4-4487-9d6b-cd97ab115921-tmpfs\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953693 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953749 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-srv-cert\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953773 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xzmz\" (UniqueName: \"kubernetes.io/projected/e7447dea-7435-4f66-94d2-19d28357acad-kube-api-access-2xzmz\") pod \"cluster-samples-operator-665b6dd947-zzh7w\" (UID: \"e7447dea-7435-4f66-94d2-19d28357acad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953796 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-trusted-ca\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953822 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-config\") pod 
\"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953848 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit-dir\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953869 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-dir\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953889 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953913 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af2b3377-4d92-4b44-bee1-338c9156c095-images\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953938 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953959 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6c11e90-4ee6-4534-ae9b-8e62ec35d08e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z5l8p\" (UID: \"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953914 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit-dir\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.953982 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxx6l\" (UniqueName: \"kubernetes.io/projected/62a0f3fd-9bc0-4505-999b-2cc156ebd430-kube-api-access-jxx6l\") pod \"downloads-7954f5f757-ptf2x\" (UID: \"62a0f3fd-9bc0-4505-999b-2cc156ebd430\") " 
pod="openshift-console/downloads-7954f5f757-ptf2x" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954007 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tq6j\" (UniqueName: \"kubernetes.io/projected/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-kube-api-access-7tq6j\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954049 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqmlk\" (UniqueName: \"kubernetes.io/projected/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-kube-api-access-xqmlk\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954072 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-serving-cert\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954094 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954116 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-service-ca-bundle\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954156 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c451d4b-5258-4a31-a438-d93a650f354f-etcd-client\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954214 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shbnh\" (UniqueName: \"kubernetes.io/projected/a85dd0b4-d50c-433c-a510-238a0f9a22bb-kube-api-access-shbnh\") pod \"dns-operator-744455d44c-b9kn8\" (UID: \"a85dd0b4-d50c-433c-a510-238a0f9a22bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954228 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c629z"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954241 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e7efaac-9ce4-4487-9d6b-cd97ab115921-webhook-cert\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: 
\"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954276 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954314 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d6l9z\" (UID: \"5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954349 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc0a96d-0893-4c54-a6df-52f9f757f72e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954367 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5cbp9\" (UID: \"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954382 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jf4m\" (UniqueName: \"kubernetes.io/projected/45531d0c-698f-4013-9b53-ccfb8f76268f-kube-api-access-2jf4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-zc7v2\" (UID: \"45531d0c-698f-4013-9b53-ccfb8f76268f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954400 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afc0a96d-0893-4c54-a6df-52f9f757f72e-config\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954413 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45531d0c-698f-4013-9b53-ccfb8f76268f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zc7v2\" (UID: \"45531d0c-698f-4013-9b53-ccfb8f76268f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954430 4975 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954460 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954478 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-node-pullsecrets\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954538 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdjhc\" (UniqueName: \"kubernetes.io/projected/20e6fd86-f979-409b-9577-6ce3a00ff73d-kube-api-access-xdjhc\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954557 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-node-pullsecrets\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954569 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kwq8\" (UniqueName: \"kubernetes.io/projected/881a346a-95ca-4ef3-a824-b10d11a6b049-kube-api-access-5kwq8\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954596 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a11604f-1c29-4f85-8008-2883eaba3233-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2ncxd\" (UID: \"5a11604f-1c29-4f85-8008-2883eaba3233\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954619 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgvj\" (UniqueName: \"kubernetes.io/projected/5a11604f-1c29-4f85-8008-2883eaba3233-kube-api-access-5jgvj\") pod \"package-server-manager-789f6589d5-2ncxd\" (UID: \"5a11604f-1c29-4f85-8008-2883eaba3233\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954647 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l9mxz\" (UniqueName: \"kubernetes.io/projected/20ca2f64-d03d-4d54-94b4-af04fcb499ce-kube-api-access-l9mxz\") pod \"migrator-59844c95c7-f246t\" (UID: \"20ca2f64-d03d-4d54-94b4-af04fcb499ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954670 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-serving-cert\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954689 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954712 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dshnm\" (UniqueName: \"kubernetes.io/projected/a92ef0de-4810-4e00-8f34-90fcb13c236c-kube-api-access-dshnm\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954736 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6h4\" (UniqueName: \"kubernetes.io/projected/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-kube-api-access-pp6h4\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954774 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-audit-policies\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954834 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954858 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954888 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/befe9235-f8a1-4fda-ace8-2752310375e3-serving-cert\") pod \"openshift-config-operator-7777fb866f-42cdr\" (UID: 
\"befe9235-f8a1-4fda-ace8-2752310375e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954913 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm966\" (UniqueName: \"kubernetes.io/projected/cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1-kube-api-access-bm966\") pod \"multus-admission-controller-857f4d67dd-c629z\" (UID: \"cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.954959 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-audit-dir\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.955002 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9261771a-1df2-4fa4-a5e1-270602b5e2ad-serving-cert\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.955037 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbsvq\" (UniqueName: \"kubernetes.io/projected/3c451d4b-5258-4a31-a438-d93a650f354f-kube-api-access-gbsvq\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.956951 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.956976 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l942m"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.957683 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-42cdr"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.958698 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b9kn8"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.960171 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.961208 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ptf2x"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.963089 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.964626 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.965591 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-prtg7"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.966658 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c7fpw"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.967739 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z6j7l"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.968812 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.969869 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.971124 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7pt74"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.971490 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.971942 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.973024 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h2jwh"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.974215 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gdlsh"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.975188 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-n6dj7"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.976389 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.976519 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.977653 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-94jvp"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.979323 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.980868 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n4xzq"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.983067 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.984403 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-68t4d"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.985568 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n6dj7"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.985673 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.986393 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.987494 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.988609 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sk2bb"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.989701 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.990783 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.991073 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.992089 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-68t4d"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.993229 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hgtw2"] Jan 22 10:39:11 crc kubenswrapper[4975]: I0122 10:39:11.993777 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.011858 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.031288 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.051355 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055718 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f51483-70e8-478e-8082-77a1372f2e8f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxzg4\" (UID: \"88f51483-70e8-478e-8082-77a1372f2e8f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055761 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9tlv\" (UniqueName: \"kubernetes.io/projected/9261771a-1df2-4fa4-a5e1-270602b5e2ad-kube-api-access-x9tlv\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055788 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgcmb\" (UniqueName: \"kubernetes.io/projected/128e6487-d288-47e9-92db-491404d8ec40-kube-api-access-fgcmb\") pod 
\"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055813 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055838 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm66p\" (UniqueName: \"kubernetes.io/projected/4e7efaac-9ce4-4487-9d6b-cd97ab115921-kube-api-access-jm66p\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055859 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055881 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j69x9\" (UniqueName: \"kubernetes.io/projected/af2b3377-4d92-4b44-bee1-338c9156c095-kube-api-access-j69x9\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055901 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/128e6487-d288-47e9-92db-491404d8ec40-serving-cert\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055921 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5f4e900b-eda0-47d5-8c08-aca5c5468915-machine-approver-tls\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055944 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45531d0c-698f-4013-9b53-ccfb8f76268f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zc7v2\" (UID: \"45531d0c-698f-4013-9b53-ccfb8f76268f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055973 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9261771a-1df2-4fa4-a5e1-270602b5e2ad-config\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.055993 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e7efaac-9ce4-4487-9d6b-cd97ab115921-tmpfs\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.056015 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.056040 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-srv-cert\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.056915 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xzmz\" (UniqueName: \"kubernetes.io/projected/e7447dea-7435-4f66-94d2-19d28357acad-kube-api-access-2xzmz\") pod \"cluster-samples-operator-665b6dd947-zzh7w\" (UID: \"e7447dea-7435-4f66-94d2-19d28357acad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.056954 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-trusted-ca\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.056980 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-config\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057010 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-dir\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057037 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057064 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af2b3377-4d92-4b44-bee1-338c9156c095-images\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057090 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057112 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6c11e90-4ee6-4534-ae9b-8e62ec35d08e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z5l8p\" (UID: \"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057140 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxx6l\" (UniqueName: \"kubernetes.io/projected/62a0f3fd-9bc0-4505-999b-2cc156ebd430-kube-api-access-jxx6l\") pod \"downloads-7954f5f757-ptf2x\" (UID: \"62a0f3fd-9bc0-4505-999b-2cc156ebd430\") " pod="openshift-console/downloads-7954f5f757-ptf2x" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057172 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tq6j\" (UniqueName: \"kubernetes.io/projected/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-kube-api-access-7tq6j\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057366 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f51483-70e8-478e-8082-77a1372f2e8f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxzg4\" (UID: \"88f51483-70e8-478e-8082-77a1372f2e8f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057408 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e7efaac-9ce4-4487-9d6b-cd97ab115921-tmpfs\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057745 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.057856 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-dir\") pod 
\"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.058482 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-serving-cert\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.058715 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.058924 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-service-ca-bundle\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.058734 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af2b3377-4d92-4b44-bee1-338c9156c095-images\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.059281 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c451d4b-5258-4a31-a438-d93a650f354f-etcd-client\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.059414 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shbnh\" (UniqueName: \"kubernetes.io/projected/a85dd0b4-d50c-433c-a510-238a0f9a22bb-kube-api-access-shbnh\") pod \"dns-operator-744455d44c-b9kn8\" (UID: \"a85dd0b4-d50c-433c-a510-238a0f9a22bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.059525 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e7efaac-9ce4-4487-9d6b-cd97ab115921-webhook-cert\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.059787 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.059898 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d6l9z\" (UID: \"5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060149 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc0a96d-0893-4c54-a6df-52f9f757f72e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060259 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-service-ca-bundle\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060264 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5cbp9\" (UID: \"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060348 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jf4m\" (UniqueName: \"kubernetes.io/projected/45531d0c-698f-4013-9b53-ccfb8f76268f-kube-api-access-2jf4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-zc7v2\" (UID: \"45531d0c-698f-4013-9b53-ccfb8f76268f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060648 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afc0a96d-0893-4c54-a6df-52f9f757f72e-config\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060680 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45531d0c-698f-4013-9b53-ccfb8f76268f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zc7v2\" (UID: \"45531d0c-698f-4013-9b53-ccfb8f76268f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060747 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060781 4975 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdjhc\" (UniqueName: \"kubernetes.io/projected/20e6fd86-f979-409b-9577-6ce3a00ff73d-kube-api-access-xdjhc\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060847 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kwq8\" (UniqueName: \"kubernetes.io/projected/881a346a-95ca-4ef3-a824-b10d11a6b049-kube-api-access-5kwq8\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060878 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a11604f-1c29-4f85-8008-2883eaba3233-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2ncxd\" (UID: \"5a11604f-1c29-4f85-8008-2883eaba3233\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060941 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jgvj\" (UniqueName: \"kubernetes.io/projected/5a11604f-1c29-4f85-8008-2883eaba3233-kube-api-access-5jgvj\") pod \"package-server-manager-789f6589d5-2ncxd\" (UID: \"5a11604f-1c29-4f85-8008-2883eaba3233\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.060971 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9mxz\" (UniqueName: \"kubernetes.io/projected/20ca2f64-d03d-4d54-94b4-af04fcb499ce-kube-api-access-l9mxz\") pod \"migrator-59844c95c7-f246t\" (UID: \"20ca2f64-d03d-4d54-94b4-af04fcb499ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.061044 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dshnm\" (UniqueName: \"kubernetes.io/projected/a92ef0de-4810-4e00-8f34-90fcb13c236c-kube-api-access-dshnm\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.061071 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6h4\" (UniqueName: \"kubernetes.io/projected/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-kube-api-access-pp6h4\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.061094 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-audit-policies\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.061158 4975 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.061196 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/befe9235-f8a1-4fda-ace8-2752310375e3-serving-cert\") pod \"openshift-config-operator-7777fb866f-42cdr\" (UID: \"befe9235-f8a1-4fda-ace8-2752310375e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.061494 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm966\" (UniqueName: \"kubernetes.io/projected/cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1-kube-api-access-bm966\") pod \"multus-admission-controller-857f4d67dd-c629z\" (UID: \"cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.061948 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-config\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.062058 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5f4e900b-eda0-47d5-8c08-aca5c5468915-machine-approver-tls\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.062100 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/128e6487-d288-47e9-92db-491404d8ec40-serving-cert\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.062954 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.063423 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.063582 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afc0a96d-0893-4c54-a6df-52f9f757f72e-config\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") 
" pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.064501 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-audit-policies\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.065070 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5cbp9\" (UID: \"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.065660 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c451d4b-5258-4a31-a438-d93a650f354f-etcd-client\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.066105 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-audit-dir\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.066219 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-audit-dir\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.066444 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc0a96d-0893-4c54-a6df-52f9f757f72e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.066536 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9261771a-1df2-4fa4-a5e1-270602b5e2ad-serving-cert\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.066584 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbsvq\" (UniqueName: \"kubernetes.io/projected/3c451d4b-5258-4a31-a438-d93a650f354f-kube-api-access-gbsvq\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.066613 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/afc0a96d-0893-4c54-a6df-52f9f757f72e-service-ca-bundle\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.067480 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.067781 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc0a96d-0893-4c54-a6df-52f9f757f72e-service-ca-bundle\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.068274 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/befe9235-f8a1-4fda-ace8-2752310375e3-serving-cert\") pod \"openshift-config-operator-7777fb866f-42cdr\" (UID: \"befe9235-f8a1-4fda-ace8-2752310375e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.068464 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.066651 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-metrics-certs\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.068691 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/af2b3377-4d92-4b44-bee1-338c9156c095-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.068797 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd3bde3-2279-48fc-9962-33f16d29dab2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lbqkh\" (UID: \"4fd3bde3-2279-48fc-9962-33f16d29dab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.068916 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-client-ca\") pod 
\"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.069018 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrhtd\" (UniqueName: \"kubernetes.io/projected/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-kube-api-access-zrhtd\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.069136 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xs4g\" (UniqueName: \"kubernetes.io/projected/b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16-kube-api-access-7xs4g\") pod \"machine-config-controller-84d6567774-5cbp9\" (UID: \"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.069243 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.069367 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw6ll\" (UniqueName: \"kubernetes.io/projected/48c3f95e-78eb-4e10-bd32-4c4644190f01-kube-api-access-gw6ll\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.069483 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-serving-cert\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.069610 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/881a346a-95ca-4ef3-a824-b10d11a6b049-proxy-tls\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.069726 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f4e900b-eda0-47d5-8c08-aca5c5468915-auth-proxy-config\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.069855 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29jpx\" (UniqueName: \"kubernetes.io/projected/81fdabfd-7410-4ca7-855b-c1764b58b808-kube-api-access-29jpx\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.069967 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-etcd-client\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070096 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afc0a96d-0893-4c54-a6df-52f9f757f72e-serving-cert\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070266 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7447dea-7435-4f66-94d2-19d28357acad-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zzh7w\" (UID: \"e7447dea-7435-4f66-94d2-19d28357acad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070407 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-stats-auth\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070524 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c11e90-4ee6-4534-ae9b-8e62ec35d08e-config\") pod \"kube-controller-manager-operator-78b949d7b-z5l8p\" (UID: \"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070645 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-trusted-ca-bundle\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070751 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81fdabfd-7410-4ca7-855b-c1764b58b808-secret-volume\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070869 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070992 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e7efaac-9ce4-4487-9d6b-cd97ab115921-apiservice-cert\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071098 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd3bde3-2279-48fc-9962-33f16d29dab2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lbqkh\" (UID: \"4fd3bde3-2279-48fc-9962-33f16d29dab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071201 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-config\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071300 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj4bh\" (UniqueName: \"kubernetes.io/projected/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-kube-api-access-rj4bh\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071412 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81fdabfd-7410-4ca7-855b-c1764b58b808-config-volume\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071530 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071667 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-config\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070566 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-client-ca\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071764 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-service-ca\") pod 
\"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071848 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-serving-cert\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071885 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a85dd0b4-d50c-433c-a510-238a0f9a22bb-metrics-tls\") pod \"dns-operator-744455d44c-b9kn8\" (UID: \"a85dd0b4-d50c-433c-a510-238a0f9a22bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071920 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-policies\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.071989 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.072023 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-oauth-config\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.072079 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/881a346a-95ca-4ef3-a824-b10d11a6b049-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.072110 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4e900b-eda0-47d5-8c08-aca5c5468915-config\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.072168 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-client-ca\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.072203 4975 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.070541 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-metrics-certs\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.072257 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-oauth-serving-cert\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.072286 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af2b3377-4d92-4b44-bee1-338c9156c095-config\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073066 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3c451d4b-5258-4a31-a438-d93a650f354f-etcd-ca\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073114 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c451d4b-5258-4a31-a438-d93a650f354f-etcd-service-ca\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073144 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-metrics-tls\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073177 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073204 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c451d4b-5258-4a31-a438-d93a650f354f-serving-cert\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 
10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073234 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16-proxy-tls\") pod \"machine-config-controller-84d6567774-5cbp9\" (UID: \"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073264 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c451d4b-5258-4a31-a438-d93a650f354f-config\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073293 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c11e90-4ee6-4534-ae9b-8e62ec35d08e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z5l8p\" (UID: \"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073319 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/881a346a-95ca-4ef3-a824-b10d11a6b049-images\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073381 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c629z\" (UID: \"cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073417 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073448 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-default-certificate\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073478 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-encryption-config\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073515 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073543 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073597 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rqqb\" (UniqueName: \"kubernetes.io/projected/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-kube-api-access-5rqqb\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073626 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm7n7\" (UniqueName: \"kubernetes.io/projected/5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f-kube-api-access-pm7n7\") pod \"control-plane-machine-set-operator-78cbb6b69f-d6l9z\" (UID: \"5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073656 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzwdw\" (UniqueName: \"kubernetes.io/projected/afc0a96d-0893-4c54-a6df-52f9f757f72e-kube-api-access-nzwdw\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073683 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgfg\" (UniqueName: \"kubernetes.io/projected/5f4e900b-eda0-47d5-8c08-aca5c5468915-kube-api-access-rqgfg\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073712 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/befe9235-f8a1-4fda-ace8-2752310375e3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-42cdr\" (UID: \"befe9235-f8a1-4fda-ace8-2752310375e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073737 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfcrr\" (UniqueName: \"kubernetes.io/projected/befe9235-f8a1-4fda-ace8-2752310375e3-kube-api-access-jfcrr\") pod \"openshift-config-operator-7777fb866f-42cdr\" (UID: \"befe9235-f8a1-4fda-ace8-2752310375e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073767 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/88f51483-70e8-478e-8082-77a1372f2e8f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxzg4\" (UID: \"88f51483-70e8-478e-8082-77a1372f2e8f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073805 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fd3bde3-2279-48fc-9962-33f16d29dab2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lbqkh\" (UID: \"4fd3bde3-2279-48fc-9962-33f16d29dab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073845 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx8v4\" (UniqueName: \"kubernetes.io/projected/88f51483-70e8-478e-8082-77a1372f2e8f-kube-api-access-qx8v4\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxzg4\" (UID: \"88f51483-70e8-478e-8082-77a1372f2e8f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.073924 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.074063 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4e900b-eda0-47d5-8c08-aca5c5468915-config\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.074943 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f4e900b-eda0-47d5-8c08-aca5c5468915-auth-proxy-config\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.075407 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-client-ca\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.076573 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a85dd0b4-d50c-433c-a510-238a0f9a22bb-metrics-tls\") pod \"dns-operator-744455d44c-b9kn8\" (UID: \"a85dd0b4-d50c-433c-a510-238a0f9a22bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.077228 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-policies\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: 
\"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.079189 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c451d4b-5258-4a31-a438-d93a650f354f-config\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.079946 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-config\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.080163 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/881a346a-95ca-4ef3-a824-b10d11a6b049-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.080264 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/befe9235-f8a1-4fda-ace8-2752310375e3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-42cdr\" (UID: \"befe9235-f8a1-4fda-ace8-2752310375e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.083061 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c451d4b-5258-4a31-a438-d93a650f354f-etcd-service-ca\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.081580 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af2b3377-4d92-4b44-bee1-338c9156c095-config\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.080813 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3c451d4b-5258-4a31-a438-d93a650f354f-etcd-ca\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.084075 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-default-certificate\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.084164 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-metrics-tls\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.084352 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afc0a96d-0893-4c54-a6df-52f9f757f72e-serving-cert\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.084515 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-stats-auth\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.084665 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.084709 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c451d4b-5258-4a31-a438-d93a650f354f-serving-cert\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.084790 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.084833 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-etcd-client\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.084826 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-serving-cert\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.085005 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7447dea-7435-4f66-94d2-19d28357acad-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zzh7w\" (UID: \"e7447dea-7435-4f66-94d2-19d28357acad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.085884 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.085898 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/af2b3377-4d92-4b44-bee1-338c9156c095-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.086753 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-serving-cert\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.087588 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-trusted-ca\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.089590 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.091615 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-encryption-config\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.096399 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88f51483-70e8-478e-8082-77a1372f2e8f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxzg4\" (UID: \"88f51483-70e8-478e-8082-77a1372f2e8f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.109821 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.115485 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.117487 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.121934 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.132104 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.151689 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.154519 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.171088 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.177780 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.197914 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.200998 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.211441 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.232671 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.251744 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.272649 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.292290 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 
10:39:12.312307 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.340618 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.352742 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.372377 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.391849 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.404289 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c629z\" (UID: \"cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.412466 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.432012 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.442703 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16-proxy-tls\") pod \"machine-config-controller-84d6567774-5cbp9\" (UID: \"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.451656 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.472059 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.492266 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.512269 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.532002 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.551481 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.572451 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 22 10:39:12 crc
kubenswrapper[4975]: I0122 10:39:12.574951 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45531d0c-698f-4013-9b53-ccfb8f76268f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zc7v2\" (UID: \"45531d0c-698f-4013-9b53-ccfb8f76268f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.591415 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.603694 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45531d0c-698f-4013-9b53-ccfb8f76268f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zc7v2\" (UID: \"45531d0c-698f-4013-9b53-ccfb8f76268f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.612273 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.630682 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.651780 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.656123 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d6l9z\" (UID: \"5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.672297 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.683434 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c11e90-4ee6-4534-ae9b-8e62ec35d08e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z5l8p\" (UID: \"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.691690 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.711810 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.731805 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122
10:39:12.734194 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c11e90-4ee6-4534-ae9b-8e62ec35d08e-config\") pod \"kube-controller-manager-operator-78b949d7b-z5l8p\" (UID: \"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.752518 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.759910 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/881a346a-95ca-4ef3-a824-b10d11a6b049-images\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.772806 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.792737 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.806142 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/881a346a-95ca-4ef3-a824-b10d11a6b049-proxy-tls\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.811780 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.832439 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.852170 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.861851 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd3bde3-2279-48fc-9962-33f16d29dab2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lbqkh\" (UID: \"4fd3bde3-2279-48fc-9962-33f16d29dab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.872986 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.884101 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd3bde3-2279-48fc-9962-33f16d29dab2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lbqkh\" (UID: \"4fd3bde3-2279-48fc-9962-33f16d29dab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh"
Jan 22 10:39:12 crc kubenswrapper[4975]:
I0122 10:39:12.889848 4975 request.go:700] Waited for 1.004548165s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.891673 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.912011 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.920001 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e7efaac-9ce4-4487-9d6b-cd97ab115921-apiservice-cert\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.926809 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e7efaac-9ce4-4487-9d6b-cd97ab115921-webhook-cert\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j"
Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.932190 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.950510 4975 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.950633 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-client podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.450602507 +0000 UTC m=+146.108638657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-client") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync secret cache: timed out waiting for the condition
Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.951026 4975 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.951131 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-image-import-ca podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.451104773 +0000 UTC m=+146.109141003 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-image-import-ca") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.952588 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.953584 4975 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.953755 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-encryption-config podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.453723142 +0000 UTC m=+146.111759252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-encryption-config") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.953924 4975 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.954048 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-serving-ca podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.45400172 +0000 UTC m=+146.112038060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-serving-ca") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.954986 4975 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.955109 4975 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.955031 4975 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.955150 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.455097442 +0000 UTC m=+146.113133582 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.955215 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-trusted-ca-bundle podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.455175005 +0000 UTC m=+146.113211105 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-trusted-ca-bundle") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.955054 4975 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.955243 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-serving-cert podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.455230496 +0000 UTC m=+146.113266606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-serving-cert") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: E0122 10:39:12.955309 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-config podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.455277038 +0000 UTC m=+146.113313188 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-config") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.972039 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.984804 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-serving-cert\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.991708 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 10:39:12 crc kubenswrapper[4975]: I0122 10:39:12.996320 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-service-ca\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.021617 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.057367 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.058454 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-trusted-ca-bundle\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.057641 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.059627 4975 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.059830 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-srv-cert podName:ecefe468-0f5a-4b37-99e2-a09d6f59f7fd nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.559797972 +0000 UTC m=+146.217834092 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-srv-cert") pod "catalog-operator-68c6474976-qw87d" (UID: "ecefe468-0f5a-4b37-99e2-a09d6f59f7fd") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.059648 4975 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.060113 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9261771a-1df2-4fa4-a5e1-270602b5e2ad-config podName:9261771a-1df2-4fa4-a5e1-270602b5e2ad nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.560096322 +0000 UTC m=+146.218132652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9261771a-1df2-4fa4-a5e1-270602b5e2ad-config") pod "service-ca-operator-777779d784-l6m7w" (UID: "9261771a-1df2-4fa4-a5e1-270602b5e2ad") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.062554 4975 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.062608 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a11604f-1c29-4f85-8008-2883eaba3233-package-server-manager-serving-cert podName:5a11604f-1c29-4f85-8008-2883eaba3233 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.562591736 +0000 UTC m=+146.220627856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/5a11604f-1c29-4f85-8008-2883eaba3233-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-2ncxd" (UID: "5a11604f-1c29-4f85-8008-2883eaba3233") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.063781 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-config\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.067317 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-oauth-serving-cert\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.067483 4975 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.067573 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9261771a-1df2-4fa4-a5e1-270602b5e2ad-serving-cert podName:9261771a-1df2-4fa4-a5e1-270602b5e2ad nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.567545946 +0000 UTC m=+146.225582286 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9261771a-1df2-4fa4-a5e1-270602b5e2ad-serving-cert") pod "service-ca-operator-777779d784-l6m7w" (UID: "9261771a-1df2-4fa4-a5e1-270602b5e2ad") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.071834 4975 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.072014 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-trusted-ca podName:48c3f95e-78eb-4e10-bd32-4c4644190f01 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.57199096 +0000 UTC m=+146.230027270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-trusted-ca") pod "marketplace-operator-79b997595-gdlsh" (UID: "48c3f95e-78eb-4e10-bd32-4c4644190f01") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.072167 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.072372 4975 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.072471 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81fdabfd-7410-4ca7-855b-c1764b58b808-secret-volume podName:81fdabfd-7410-4ca7-855b-c1764b58b808 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.572454614 +0000 UTC m=+146.230490734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/81fdabfd-7410-4ca7-855b-c1764b58b808-secret-volume") pod "collect-profiles-29484630-xqs7z" (UID: "81fdabfd-7410-4ca7-855b-c1764b58b808") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.073589 4975 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.073715 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/81fdabfd-7410-4ca7-855b-c1764b58b808-config-volume podName:81fdabfd-7410-4ca7-855b-c1764b58b808 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.57368755 +0000 UTC m=+146.231723880 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/81fdabfd-7410-4ca7-855b-c1764b58b808-config-volume") pod "collect-profiles-29484630-xqs7z" (UID: "81fdabfd-7410-4ca7-855b-c1764b58b808") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.075859 4975 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.075955 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-operator-metrics podName:48c3f95e-78eb-4e10-bd32-4c4644190f01 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.575908487 +0000 UTC m=+146.233944807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-operator-metrics") pod "marketplace-operator-79b997595-gdlsh" (UID: "48c3f95e-78eb-4e10-bd32-4c4644190f01") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.078935 4975 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: E0122 10:39:13.079101 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-profile-collector-cert podName:ecefe468-0f5a-4b37-99e2-a09d6f59f7fd nodeName:}" failed. No retries permitted until 2026-01-22 10:39:13.579085963 +0000 UTC m=+146.237122083 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-profile-collector-cert") pod "catalog-operator-68c6474976-qw87d" (UID: "ecefe468-0f5a-4b37-99e2-a09d6f59f7fd") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.080136 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-oauth-config\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.092796 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.112308 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.132103 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.153180 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.182389 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.191808 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.212686 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.232678 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.252112 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.272224 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.292622 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.311830 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.332323 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.352315 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.371712 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 
10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.392476 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.433413 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.452483 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.472423 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.492176 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.510892 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-image-import-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.511099 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-encryption-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.511266 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-serving-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.511521 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.511639 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-serving-cert\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.511731 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5"
Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.511779 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-trusted-ca-bundle\") pod
\"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.512247 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-client\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.512523 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.532322 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.572814 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.592082 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.612507 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.613396 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81fdabfd-7410-4ca7-855b-c1764b58b808-config-volume\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.613487 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.613578 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.613781 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9261771a-1df2-4fa4-a5e1-270602b5e2ad-config\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.613825 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-srv-cert\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 
10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.614025 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a11604f-1c29-4f85-8008-2883eaba3233-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2ncxd\" (UID: \"5a11604f-1c29-4f85-8008-2883eaba3233\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.614180 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9261771a-1df2-4fa4-a5e1-270602b5e2ad-serving-cert\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.614280 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.614400 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81fdabfd-7410-4ca7-855b-c1764b58b808-secret-volume\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.615118 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81fdabfd-7410-4ca7-855b-c1764b58b808-config-volume\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.615180 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9261771a-1df2-4fa4-a5e1-270602b5e2ad-config\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.617780 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.619256 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.619806 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-srv-cert\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.620093 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81fdabfd-7410-4ca7-855b-c1764b58b808-secret-volume\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.620675 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9261771a-1df2-4fa4-a5e1-270602b5e2ad-serving-cert\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.621738 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a11604f-1c29-4f85-8008-2883eaba3233-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2ncxd\" (UID: \"5a11604f-1c29-4f85-8008-2883eaba3233\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.622585 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.633272 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.672848 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.692530 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.711885 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.732290 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.752111 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.774942 4975 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.791619 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.812126 4975 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.832067 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.898095 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgcmb\" (UniqueName: \"kubernetes.io/projected/128e6487-d288-47e9-92db-491404d8ec40-kube-api-access-fgcmb\") pod \"controller-manager-879f6c89f-ck7qg\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.898556 4975 request.go:700] Waited for 1.841844517s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.901587 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm66p\" (UniqueName: \"kubernetes.io/projected/4e7efaac-9ce4-4487-9d6b-cd97ab115921-kube-api-access-jm66p\") pod \"packageserver-d55dfcdfc-ff77j\" (UID: \"4e7efaac-9ce4-4487-9d6b-cd97ab115921\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.928446 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9tlv\" (UniqueName: \"kubernetes.io/projected/9261771a-1df2-4fa4-a5e1-270602b5e2ad-kube-api-access-x9tlv\") pod \"service-ca-operator-777779d784-l6m7w\" (UID: \"9261771a-1df2-4fa4-a5e1-270602b5e2ad\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.937453 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.938646 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j69x9\" (UniqueName: \"kubernetes.io/projected/af2b3377-4d92-4b44-bee1-338c9156c095-kube-api-access-j69x9\") pod \"machine-api-operator-5694c8668f-prtg7\" (UID: \"af2b3377-4d92-4b44-bee1-338c9156c095\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.958494 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.965762 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:13 crc kubenswrapper[4975]: I0122 10:39:13.982498 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6c11e90-4ee6-4534-ae9b-8e62ec35d08e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z5l8p\" (UID: \"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.001930 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxx6l\" (UniqueName: \"kubernetes.io/projected/62a0f3fd-9bc0-4505-999b-2cc156ebd430-kube-api-access-jxx6l\") pod \"downloads-7954f5f757-ptf2x\" (UID: \"62a0f3fd-9bc0-4505-999b-2cc156ebd430\") " pod="openshift-console/downloads-7954f5f757-ptf2x" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.005873 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.036170 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tq6j\" (UniqueName: \"kubernetes.io/projected/56ade85a-531d-4cc3-bb1c-9e8d6fc6a815-kube-api-access-7tq6j\") pod \"apiserver-7bbb656c7d-5vjmf\" (UID: \"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.036223 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xzmz\" (UniqueName: \"kubernetes.io/projected/e7447dea-7435-4f66-94d2-19d28357acad-kube-api-access-2xzmz\") pod \"cluster-samples-operator-665b6dd947-zzh7w\" (UID: \"e7447dea-7435-4f66-94d2-19d28357acad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.040947 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ptf2x" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.052290 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shbnh\" (UniqueName: \"kubernetes.io/projected/a85dd0b4-d50c-433c-a510-238a0f9a22bb-kube-api-access-shbnh\") pod \"dns-operator-744455d44c-b9kn8\" (UID: \"a85dd0b4-d50c-433c-a510-238a0f9a22bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.057239 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.075192 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.077240 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jf4m\" (UniqueName: \"kubernetes.io/projected/45531d0c-698f-4013-9b53-ccfb8f76268f-kube-api-access-2jf4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-zc7v2\" (UID: \"45531d0c-698f-4013-9b53-ccfb8f76268f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.097668 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.114033 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kwq8\" (UniqueName: \"kubernetes.io/projected/881a346a-95ca-4ef3-a824-b10d11a6b049-kube-api-access-5kwq8\") pod \"machine-config-operator-74547568cd-l942m\" (UID: \"881a346a-95ca-4ef3-a824-b10d11a6b049\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.125143 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6h4\" (UniqueName: \"kubernetes.io/projected/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-kube-api-access-pp6h4\") pod \"route-controller-manager-6576b87f9c-6csc4\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.129100 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdjhc\" (UniqueName: \"kubernetes.io/projected/20e6fd86-f979-409b-9577-6ce3a00ff73d-kube-api-access-xdjhc\") pod \"oauth-openshift-558db77b4-c7fpw\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.148502 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.156465 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9mxz\" (UniqueName: \"kubernetes.io/projected/20ca2f64-d03d-4d54-94b4-af04fcb499ce-kube-api-access-l9mxz\") pod \"migrator-59844c95c7-f246t\" (UID: \"20ca2f64-d03d-4d54-94b4-af04fcb499ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.177405 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dshnm\" (UniqueName: \"kubernetes.io/projected/a92ef0de-4810-4e00-8f34-90fcb13c236c-kube-api-access-dshnm\") pod \"console-f9d7485db-7pt74\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.193929 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jgvj\" (UniqueName: \"kubernetes.io/projected/5a11604f-1c29-4f85-8008-2883eaba3233-kube-api-access-5jgvj\") pod \"package-server-manager-789f6589d5-2ncxd\" (UID: \"5a11604f-1c29-4f85-8008-2883eaba3233\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.217144 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck7qg"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.220975 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm966\" (UniqueName: \"kubernetes.io/projected/cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1-kube-api-access-bm966\") pod \"multus-admission-controller-857f4d67dd-c629z\" (UID: \"cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.226231 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbsvq\" (UniqueName: \"kubernetes.io/projected/3c451d4b-5258-4a31-a438-d93a650f354f-kube-api-access-gbsvq\") pod \"etcd-operator-b45778765-h2jwh\" (UID: \"3c451d4b-5258-4a31-a438-d93a650f354f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.233791 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" Jan 22 10:39:14 crc kubenswrapper[4975]: W0122 10:39:14.243996 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod128e6487_d288_47e9_92db_491404d8ec40.slice/crio-5771e6f435334ee7735999084bf34306b514812b4df6ff62d40ae21cdfaa79d8 WatchSource:0}: Error finding container 5771e6f435334ee7735999084bf34306b514812b4df6ff62d40ae21cdfaa79d8: Status 404 returned error can't find the container with id 5771e6f435334ee7735999084bf34306b514812b4df6ff62d40ae21cdfaa79d8 Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.244287 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrhtd\" (UniqueName: \"kubernetes.io/projected/3ade24ae-bd68-4ce7-ae8c-de8f35a850d9-kube-api-access-zrhtd\") pod \"router-default-5444994796-2jxxh\" (UID: \"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9\") " pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.246282 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.251929 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.251995 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.273568 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.274655 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xs4g\" (UniqueName: \"kubernetes.io/projected/b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16-kube-api-access-7xs4g\") pod \"machine-config-controller-84d6567774-5cbp9\" (UID: \"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.286340 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.289768 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw6ll\" (UniqueName: \"kubernetes.io/projected/48c3f95e-78eb-4e10-bd32-4c4644190f01-kube-api-access-gw6ll\") pod \"marketplace-operator-79b997595-gdlsh\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.293411 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:14 crc kubenswrapper[4975]: W0122 10:39:14.294076 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e7efaac_9ce4_4487_9d6b_cd97ab115921.slice/crio-31c84e156188ad953c05a43cf079483298747583212b5839c95670a3920c9f0d WatchSource:0}: Error finding container 31c84e156188ad953c05a43cf079483298747583212b5839c95670a3920c9f0d: Status 404 returned error can't find the container with id 31c84e156188ad953c05a43cf079483298747583212b5839c95670a3920c9f0d Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.306904 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29jpx\" (UniqueName: \"kubernetes.io/projected/81fdabfd-7410-4ca7-855b-c1764b58b808-kube-api-access-29jpx\") pod \"collect-profiles-29484630-xqs7z\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.331506 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj4bh\" (UniqueName: \"kubernetes.io/projected/7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2-kube-api-access-rj4bh\") pod \"ingress-operator-5b745b69d9-v7hww\" (UID: \"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.331852 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.348613 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx8v4\" (UniqueName: \"kubernetes.io/projected/88f51483-70e8-478e-8082-77a1372f2e8f-kube-api-access-qx8v4\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxzg4\" (UID: \"88f51483-70e8-478e-8082-77a1372f2e8f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.366583 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzwdw\" (UniqueName: \"kubernetes.io/projected/afc0a96d-0893-4c54-a6df-52f9f757f72e-kube-api-access-nzwdw\") pod \"authentication-operator-69f744f599-dfvnx\" (UID: \"afc0a96d-0893-4c54-a6df-52f9f757f72e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.367207 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.387903 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fd3bde3-2279-48fc-9962-33f16d29dab2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lbqkh\" (UID: \"4fd3bde3-2279-48fc-9962-33f16d29dab2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.403679 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.407294 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rqqb\" (UniqueName: \"kubernetes.io/projected/ecefe468-0f5a-4b37-99e2-a09d6f59f7fd-kube-api-access-5rqqb\") pod \"catalog-operator-68c6474976-qw87d\" (UID: \"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.410916 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.420536 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.425172 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.440278 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm7n7\" (UniqueName: \"kubernetes.io/projected/5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f-kube-api-access-pm7n7\") pod \"control-plane-machine-set-operator-78cbb6b69f-d6l9z\" (UID: \"5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.446372 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.452919 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgfg\" (UniqueName: \"kubernetes.io/projected/5f4e900b-eda0-47d5-8c08-aca5c5468915-kube-api-access-rqgfg\") pod \"machine-approver-56656f9798-6p98b\" (UID: \"5f4e900b-eda0-47d5-8c08-aca5c5468915\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.467383 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfcrr\" (UniqueName: \"kubernetes.io/projected/befe9235-f8a1-4fda-ace8-2752310375e3-kube-api-access-jfcrr\") pod \"openshift-config-operator-7777fb866f-42cdr\" (UID: \"befe9235-f8a1-4fda-ace8-2752310375e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.471765 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.475631 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-client\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.491571 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.511837 4975 configmap.go:193] Couldn't get configMap 
openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.511917 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-image-import-ca podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.511899163 +0000 UTC m=+148.169935273 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-image-import-ca") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.511970 4975 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.511997 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-serving-cert podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.511988395 +0000 UTC m=+148.170024505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-serving-cert") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512010 4975 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512057 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-encryption-config podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.512048267 +0000 UTC m=+148.170084377 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-encryption-config") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512083 4975 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512101 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.512096049 +0000 UTC m=+148.170132149 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.512368 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512582 4975 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512633 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-serving-ca podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.512625454 +0000 UTC m=+148.170661564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-serving-ca") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512657 4975 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512702 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-trusted-ca-bundle podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.512695376 +0000 UTC m=+148.170731486 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-trusted-ca-bundle") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512726 4975 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.512743 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-config podName:d4d8cf2a-15db-4bcc-b80c-9e9a273fca82 nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.512738588 +0000 UTC m=+148.170774698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-config") pod "apiserver-76f77b778f-62jj5" (UID: "d4d8cf2a-15db-4bcc-b80c-9e9a273fca82") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.517913 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.522195 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.531164 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.541010 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.546839 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ptf2x"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.556513 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.559159 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.559639 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.561974 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2jxxh" event={"ID":"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9","Type":"ContainerStarted","Data":"c105f6be484cf450d77555ca9094889ec771f693da730d4b23167219518b0084"} Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.566247 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" event={"ID":"4e7efaac-9ce4-4487-9d6b-cd97ab115921","Type":"ContainerStarted","Data":"31c84e156188ad953c05a43cf079483298747583212b5839c95670a3920c9f0d"} Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.569288 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" event={"ID":"128e6487-d288-47e9-92db-491404d8ec40","Type":"ContainerStarted","Data":"ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5"} Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.569462 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" event={"ID":"128e6487-d288-47e9-92db-491404d8ec40","Type":"ContainerStarted","Data":"5771e6f435334ee7735999084bf34306b514812b4df6ff62d40ae21cdfaa79d8"} Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.569943 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.571295 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.575997 4975 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ck7qg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.576055 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" 
podUID="128e6487-d288-47e9-92db-491404d8ec40" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.585077 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.585150 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.591033 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.594987 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.596940 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.600685 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b9kn8"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.604677 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqmlk\" (UniqueName: \"kubernetes.io/projected/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-kube-api-access-xqmlk\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.605582 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-prtg7"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.612485 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.612596 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.633096 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.652949 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.661193 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.672527 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.692214 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.699485 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c7fpw"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732338 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.732470 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:41:16.732441978 +0000 UTC m=+269.390478088 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732522 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6141abb-1a67-4419-b0a5-4796b4af2794-config\") pod \"kube-apiserver-operator-766d6c64bb-qqvqj\" (UID: \"f6141abb-1a67-4419-b0a5-4796b4af2794\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732565 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaeb6317-904e-4fae-90f6-a8169f8f7b61-serving-cert\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732618 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732650 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62tn\" (UniqueName: \"kubernetes.io/projected/19947433-0cc9-4d2c-87e0-77440921e949-kube-api-access-n62tn\") pod 
\"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732739 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19947433-0cc9-4d2c-87e0-77440921e949-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732794 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-trusted-ca\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732827 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19947433-0cc9-4d2c-87e0-77440921e949-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732886 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-tls\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732902 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/588b9cff-3186-4fc2-9498-90fd2272fb66-ca-trust-extracted\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732918 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffde48e3-a9b2-4616-b37b-29450881e65c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nng6l\" (UID: \"ffde48e3-a9b2-4616-b37b-29450881e65c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.732982 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733003 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg8v7\" (UniqueName: 
\"kubernetes.io/projected/ffde48e3-a9b2-4616-b37b-29450881e65c-kube-api-access-mg8v7\") pod \"openshift-apiserver-operator-796bbdcf4f-nng6l\" (UID: \"ffde48e3-a9b2-4616-b37b-29450881e65c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733022 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-certificates\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733039 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19947433-0cc9-4d2c-87e0-77440921e949-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733055 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l48b\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-kube-api-access-4l48b\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733093 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/588b9cff-3186-4fc2-9498-90fd2272fb66-installation-pull-secrets\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733108 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6141abb-1a67-4419-b0a5-4796b4af2794-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qqvqj\" (UID: \"f6141abb-1a67-4419-b0a5-4796b4af2794\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733145 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733161 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaeb6317-904e-4fae-90f6-a8169f8f7b61-config\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733183 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6141abb-1a67-4419-b0a5-4796b4af2794-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qqvqj\" (UID: \"f6141abb-1a67-4419-b0a5-4796b4af2794\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733198 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cj5v\" (UniqueName: \"kubernetes.io/projected/eaeb6317-904e-4fae-90f6-a8169f8f7b61-kube-api-access-6cj5v\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733226 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eaeb6317-904e-4fae-90f6-a8169f8f7b61-trusted-ca\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733240 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffde48e3-a9b2-4616-b37b-29450881e65c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nng6l\" (UID: \"ffde48e3-a9b2-4616-b37b-29450881e65c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733274 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-bound-sa-token\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.733270 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.734004 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.233991915 +0000 UTC m=+147.892028025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.745765 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.776626 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l942m"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.786532 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.835664 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.835774 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.335748306 +0000 UTC m=+147.993784416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.835941 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n62tn\" (UniqueName: \"kubernetes.io/projected/19947433-0cc9-4d2c-87e0-77440921e949-kube-api-access-n62tn\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.835965 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19947433-0cc9-4d2c-87e0-77440921e949-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836010 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-trusted-ca\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836073 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19947433-0cc9-4d2c-87e0-77440921e949-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836092 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1b4c680b-2a21-42d7-8362-9a240b75e000-signing-key\") pod \"service-ca-9c57cc56f-sk2bb\" (UID: \"1b4c680b-2a21-42d7-8362-9a240b75e000\") " pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836165 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-tls\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836184 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/588b9cff-3186-4fc2-9498-90fd2272fb66-ca-trust-extracted\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836200 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1b4c680b-2a21-42d7-8362-9a240b75e000-signing-cabundle\") pod \"service-ca-9c57cc56f-sk2bb\" (UID: \"1b4c680b-2a21-42d7-8362-9a240b75e000\") " pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836225 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffde48e3-a9b2-4616-b37b-29450881e65c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nng6l\" (UID: \"ffde48e3-a9b2-4616-b37b-29450881e65c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836436 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4bhs\" (UniqueName: \"kubernetes.io/projected/1b4c680b-2a21-42d7-8362-9a240b75e000-kube-api-access-s4bhs\") pod \"service-ca-9c57cc56f-sk2bb\" (UID: \"1b4c680b-2a21-42d7-8362-9a240b75e000\") " pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836492 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/445d4b66-b926-4822-83bc-13ff2ceb19ea-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkhj2\" (UID: \"445d4b66-b926-4822-83bc-13ff2ceb19ea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836518 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg8v7\" (UniqueName: \"kubernetes.io/projected/ffde48e3-a9b2-4616-b37b-29450881e65c-kube-api-access-mg8v7\") pod \"openshift-apiserver-operator-796bbdcf4f-nng6l\" (UID: \"ffde48e3-a9b2-4616-b37b-29450881e65c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836535 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/134e44c3-9a68-4785-b29c-ed473a0febc6-cert\") pod \"ingress-canary-n4xzq\" (UID: \"134e44c3-9a68-4785-b29c-ed473a0febc6\") " pod="openshift-ingress-canary/ingress-canary-n4xzq" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836581 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-certificates\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836597 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kl7f\" (UniqueName: \"kubernetes.io/projected/445d4b66-b926-4822-83bc-13ff2ceb19ea-kube-api-access-9kl7f\") pod \"olm-operator-6b444d44fb-nkhj2\" (UID: \"445d4b66-b926-4822-83bc-13ff2ceb19ea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836622 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/d677fccc-b7e9-4850-83af-2246938ebf69-metrics-tls\") pod \"dns-default-n6dj7\" (UID: \"d677fccc-b7e9-4850-83af-2246938ebf69\") " pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836639 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19947433-0cc9-4d2c-87e0-77440921e949-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836666 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l48b\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-kube-api-access-4l48b\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836779 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/588b9cff-3186-4fc2-9498-90fd2272fb66-installation-pull-secrets\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836825 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6141abb-1a67-4419-b0a5-4796b4af2794-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qqvqj\" (UID: \"f6141abb-1a67-4419-b0a5-4796b4af2794\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836872 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/aabe738d-7f14-493c-93bd-a92ec26d99a7-certs\") pod \"machine-config-server-hgtw2\" (UID: \"aabe738d-7f14-493c-93bd-a92ec26d99a7\") " pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.836907 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-socket-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837087 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837117 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaeb6317-904e-4fae-90f6-a8169f8f7b61-config\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " 
pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837185 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqvmh\" (UniqueName: \"kubernetes.io/projected/aabe738d-7f14-493c-93bd-a92ec26d99a7-kube-api-access-rqvmh\") pod \"machine-config-server-hgtw2\" (UID: \"aabe738d-7f14-493c-93bd-a92ec26d99a7\") " pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837301 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6141abb-1a67-4419-b0a5-4796b4af2794-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qqvqj\" (UID: \"f6141abb-1a67-4419-b0a5-4796b4af2794\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837592 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/445d4b66-b926-4822-83bc-13ff2ceb19ea-srv-cert\") pod \"olm-operator-6b444d44fb-nkhj2\" (UID: \"445d4b66-b926-4822-83bc-13ff2ceb19ea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837627 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cj5v\" (UniqueName: \"kubernetes.io/projected/eaeb6317-904e-4fae-90f6-a8169f8f7b61-kube-api-access-6cj5v\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837679 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/aabe738d-7f14-493c-93bd-a92ec26d99a7-node-bootstrap-token\") pod \"machine-config-server-hgtw2\" (UID: \"aabe738d-7f14-493c-93bd-a92ec26d99a7\") " pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837714 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837831 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eaeb6317-904e-4fae-90f6-a8169f8f7b61-trusted-ca\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837866 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffde48e3-a9b2-4616-b37b-29450881e65c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nng6l\" (UID: \"ffde48e3-a9b2-4616-b37b-29450881e65c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837920 4975 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdc98\" (UniqueName: \"kubernetes.io/projected/d677fccc-b7e9-4850-83af-2246938ebf69-kube-api-access-wdc98\") pod \"dns-default-n6dj7\" (UID: \"d677fccc-b7e9-4850-83af-2246938ebf69\") " pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.837952 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-registration-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838190 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-bound-sa-token\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838241 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838274 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-csi-data-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838379 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh5ww\" (UniqueName: \"kubernetes.io/projected/134e44c3-9a68-4785-b29c-ed473a0febc6-kube-api-access-bh5ww\") pod \"ingress-canary-n4xzq\" (UID: \"134e44c3-9a68-4785-b29c-ed473a0febc6\") " pod="openshift-ingress-canary/ingress-canary-n4xzq" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838427 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-mountpoint-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838708 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6141abb-1a67-4419-b0a5-4796b4af2794-config\") pod \"kube-apiserver-operator-766d6c64bb-qqvqj\" (UID: \"f6141abb-1a67-4419-b0a5-4796b4af2794\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838763 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaeb6317-904e-4fae-90f6-a8169f8f7b61-serving-cert\") pod \"console-operator-58897d9998-z6j7l\" 
(UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838801 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-plugins-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838818 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d677fccc-b7e9-4850-83af-2246938ebf69-config-volume\") pod \"dns-default-n6dj7\" (UID: \"d677fccc-b7e9-4850-83af-2246938ebf69\") " pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.838928 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7m8b\" (UniqueName: \"kubernetes.io/projected/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-kube-api-access-v7m8b\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.859071 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19947433-0cc9-4d2c-87e0-77440921e949-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.860027 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-tls\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.860308 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/588b9cff-3186-4fc2-9498-90fd2272fb66-ca-trust-extracted\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.861623 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-certificates\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.862876 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6141abb-1a67-4419-b0a5-4796b4af2794-config\") pod \"kube-apiserver-operator-766d6c64bb-qqvqj\" (UID: \"f6141abb-1a67-4419-b0a5-4796b4af2794\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.864095 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19947433-0cc9-4d2c-87e0-77440921e949-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.865752 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/588b9cff-3186-4fc2-9498-90fd2272fb66-installation-pull-secrets\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.866034 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaeb6317-904e-4fae-90f6-a8169f8f7b61-config\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.866158 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eaeb6317-904e-4fae-90f6-a8169f8f7b61-trusted-ca\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.866270 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.366256184 +0000 UTC m=+148.024292294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.866698 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffde48e3-a9b2-4616-b37b-29450881e65c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nng6l\" (UID: \"ffde48e3-a9b2-4616-b37b-29450881e65c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.867952 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-trusted-ca\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.870283 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaeb6317-904e-4fae-90f6-a8169f8f7b61-serving-cert\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.872036 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffde48e3-a9b2-4616-b37b-29450881e65c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nng6l\" (UID: \"ffde48e3-a9b2-4616-b37b-29450881e65c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.872634 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6141abb-1a67-4419-b0a5-4796b4af2794-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qqvqj\" (UID: \"f6141abb-1a67-4419-b0a5-4796b4af2794\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.877961 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.885062 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.890030 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.894798 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l48b\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-kube-api-access-4l48b\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:14 crc kubenswrapper[4975]: W0122 10:39:14.908486 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6c11e90_4ee6_4534_ae9b_8e62ec35d08e.slice/crio-8f95d4b75fd5a4ecaabe239b34326647230c715eb6bd8d4f5476b1e4a9548030 WatchSource:0}: Error finding container 8f95d4b75fd5a4ecaabe239b34326647230c715eb6bd8d4f5476b1e4a9548030: Status 404 returned error can't find the container with id 8f95d4b75fd5a4ecaabe239b34326647230c715eb6bd8d4f5476b1e4a9548030 Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.912609 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.916116 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.918512 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7pt74"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.920269 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h2jwh"] Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.933591 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n62tn\" (UniqueName: \"kubernetes.io/projected/19947433-0cc9-4d2c-87e0-77440921e949-kube-api-access-n62tn\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.938247 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6141abb-1a67-4419-b0a5-4796b4af2794-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qqvqj\" (UID: \"f6141abb-1a67-4419-b0a5-4796b4af2794\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.939700 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.939869 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-socket-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.939939 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/aabe738d-7f14-493c-93bd-a92ec26d99a7-certs\") pod \"machine-config-server-hgtw2\" (UID: \"aabe738d-7f14-493c-93bd-a92ec26d99a7\") " pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.939976 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqvmh\" (UniqueName: \"kubernetes.io/projected/aabe738d-7f14-493c-93bd-a92ec26d99a7-kube-api-access-rqvmh\") pod \"machine-config-server-hgtw2\" (UID: \"aabe738d-7f14-493c-93bd-a92ec26d99a7\") " pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940007 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/445d4b66-b926-4822-83bc-13ff2ceb19ea-srv-cert\") pod \"olm-operator-6b444d44fb-nkhj2\" (UID: \"445d4b66-b926-4822-83bc-13ff2ceb19ea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940027 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/aabe738d-7f14-493c-93bd-a92ec26d99a7-node-bootstrap-token\") pod \"machine-config-server-hgtw2\" (UID: \"aabe738d-7f14-493c-93bd-a92ec26d99a7\") " pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940082 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdc98\" (UniqueName: \"kubernetes.io/projected/d677fccc-b7e9-4850-83af-2246938ebf69-kube-api-access-wdc98\") pod \"dns-default-n6dj7\" (UID: \"d677fccc-b7e9-4850-83af-2246938ebf69\") " pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940102 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-registration-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940142 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-csi-data-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940167 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh5ww\" (UniqueName: \"kubernetes.io/projected/134e44c3-9a68-4785-b29c-ed473a0febc6-kube-api-access-bh5ww\") pod \"ingress-canary-n4xzq\" (UID: \"134e44c3-9a68-4785-b29c-ed473a0febc6\") " pod="openshift-ingress-canary/ingress-canary-n4xzq" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940183 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-mountpoint-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 
10:39:14.940202 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-plugins-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940244 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d677fccc-b7e9-4850-83af-2246938ebf69-config-volume\") pod \"dns-default-n6dj7\" (UID: \"d677fccc-b7e9-4850-83af-2246938ebf69\") " pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940265 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7m8b\" (UniqueName: \"kubernetes.io/projected/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-kube-api-access-v7m8b\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940299 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1b4c680b-2a21-42d7-8362-9a240b75e000-signing-key\") pod \"service-ca-9c57cc56f-sk2bb\" (UID: \"1b4c680b-2a21-42d7-8362-9a240b75e000\") " pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940408 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1b4c680b-2a21-42d7-8362-9a240b75e000-signing-cabundle\") pod \"service-ca-9c57cc56f-sk2bb\" (UID: \"1b4c680b-2a21-42d7-8362-9a240b75e000\") " pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940472 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4bhs\" (UniqueName: \"kubernetes.io/projected/1b4c680b-2a21-42d7-8362-9a240b75e000-kube-api-access-s4bhs\") pod \"service-ca-9c57cc56f-sk2bb\" (UID: \"1b4c680b-2a21-42d7-8362-9a240b75e000\") " pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940493 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/445d4b66-b926-4822-83bc-13ff2ceb19ea-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkhj2\" (UID: \"445d4b66-b926-4822-83bc-13ff2ceb19ea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940539 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/134e44c3-9a68-4785-b29c-ed473a0febc6-cert\") pod \"ingress-canary-n4xzq\" (UID: \"134e44c3-9a68-4785-b29c-ed473a0febc6\") " pod="openshift-ingress-canary/ingress-canary-n4xzq" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.940558 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kl7f\" (UniqueName: \"kubernetes.io/projected/445d4b66-b926-4822-83bc-13ff2ceb19ea-kube-api-access-9kl7f\") pod \"olm-operator-6b444d44fb-nkhj2\" (UID: \"445d4b66-b926-4822-83bc-13ff2ceb19ea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:14 
crc kubenswrapper[4975]: I0122 10:39:14.940612 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d677fccc-b7e9-4850-83af-2246938ebf69-metrics-tls\") pod \"dns-default-n6dj7\" (UID: \"d677fccc-b7e9-4850-83af-2246938ebf69\") " pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:14 crc kubenswrapper[4975]: E0122 10:39:14.941354 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.441307802 +0000 UTC m=+148.099343912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.941489 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-socket-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.942107 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-registration-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.942244 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-csi-data-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.942774 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-mountpoint-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.942825 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-plugins-dir\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.943379 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d677fccc-b7e9-4850-83af-2246938ebf69-config-volume\") pod \"dns-default-n6dj7\" (UID: \"d677fccc-b7e9-4850-83af-2246938ebf69\") " pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.944569 4975 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1b4c680b-2a21-42d7-8362-9a240b75e000-signing-cabundle\") pod \"service-ca-9c57cc56f-sk2bb\" (UID: \"1b4c680b-2a21-42d7-8362-9a240b75e000\") " pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.948423 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/aabe738d-7f14-493c-93bd-a92ec26d99a7-certs\") pod \"machine-config-server-hgtw2\" (UID: \"aabe738d-7f14-493c-93bd-a92ec26d99a7\") " pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.954777 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/134e44c3-9a68-4785-b29c-ed473a0febc6-cert\") pod \"ingress-canary-n4xzq\" (UID: \"134e44c3-9a68-4785-b29c-ed473a0febc6\") " pod="openshift-ingress-canary/ingress-canary-n4xzq" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.962948 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/445d4b66-b926-4822-83bc-13ff2ceb19ea-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkhj2\" (UID: \"445d4b66-b926-4822-83bc-13ff2ceb19ea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.964007 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1b4c680b-2a21-42d7-8362-9a240b75e000-signing-key\") pod \"service-ca-9c57cc56f-sk2bb\" (UID: \"1b4c680b-2a21-42d7-8362-9a240b75e000\") " pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.964267 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/445d4b66-b926-4822-83bc-13ff2ceb19ea-srv-cert\") pod \"olm-operator-6b444d44fb-nkhj2\" (UID: \"445d4b66-b926-4822-83bc-13ff2ceb19ea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.966492 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg8v7\" (UniqueName: \"kubernetes.io/projected/ffde48e3-a9b2-4616-b37b-29450881e65c-kube-api-access-mg8v7\") pod \"openshift-apiserver-operator-796bbdcf4f-nng6l\" (UID: \"ffde48e3-a9b2-4616-b37b-29450881e65c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.967758 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d677fccc-b7e9-4850-83af-2246938ebf69-metrics-tls\") pod \"dns-default-n6dj7\" (UID: \"d677fccc-b7e9-4850-83af-2246938ebf69\") " pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.967970 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19947433-0cc9-4d2c-87e0-77440921e949-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-csxrs\" (UID: \"19947433-0cc9-4d2c-87e0-77440921e949\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.975952 4975 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/aabe738d-7f14-493c-93bd-a92ec26d99a7-node-bootstrap-token\") pod \"machine-config-server-hgtw2\" (UID: \"aabe738d-7f14-493c-93bd-a92ec26d99a7\") " pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.982437 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.989406 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:39:14 crc kubenswrapper[4975]: I0122 10:39:14.990758 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cj5v\" (UniqueName: \"kubernetes.io/projected/eaeb6317-904e-4fae-90f6-a8169f8f7b61-kube-api-access-6cj5v\") pod \"console-operator-58897d9998-z6j7l\" (UID: \"eaeb6317-904e-4fae-90f6-a8169f8f7b61\") " pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:14 crc kubenswrapper[4975]: W0122 10:39:14.991466 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda92ef0de_4810_4e00_8f34_90fcb13c236c.slice/crio-eada703cc76ff389d11b5c42bb7d2e10770e9c8cb38f1a2fc9bfd794522f2d47 WatchSource:0}: Error finding container eada703cc76ff389d11b5c42bb7d2e10770e9c8cb38f1a2fc9bfd794522f2d47: Status 404 returned error can't find the container with id eada703cc76ff389d11b5c42bb7d2e10770e9c8cb38f1a2fc9bfd794522f2d47 Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.006425 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-bound-sa-token\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.039352 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.041783 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.042135 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.542123676 +0000 UTC m=+148.200159786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.045301 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh5ww\" (UniqueName: \"kubernetes.io/projected/134e44c3-9a68-4785-b29c-ed473a0febc6-kube-api-access-bh5ww\") pod \"ingress-canary-n4xzq\" (UID: \"134e44c3-9a68-4785-b29c-ed473a0febc6\") " pod="openshift-ingress-canary/ingress-canary-n4xzq" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.053055 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.097864 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdc98\" (UniqueName: \"kubernetes.io/projected/d677fccc-b7e9-4850-83af-2246938ebf69-kube-api-access-wdc98\") pod \"dns-default-n6dj7\" (UID: \"d677fccc-b7e9-4850-83af-2246938ebf69\") " pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.107189 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqvmh\" (UniqueName: \"kubernetes.io/projected/aabe738d-7f14-493c-93bd-a92ec26d99a7-kube-api-access-rqvmh\") pod \"machine-config-server-hgtw2\" (UID: \"aabe738d-7f14-493c-93bd-a92ec26d99a7\") " pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.121958 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4bhs\" (UniqueName: \"kubernetes.io/projected/1b4c680b-2a21-42d7-8362-9a240b75e000-kube-api-access-s4bhs\") pod \"service-ca-9c57cc56f-sk2bb\" (UID: \"1b4c680b-2a21-42d7-8362-9a240b75e000\") " pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.168826 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.169163 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.669148097 +0000 UTC m=+148.327184207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.172821 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7m8b\" (UniqueName: \"kubernetes.io/projected/4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2-kube-api-access-v7m8b\") pod \"csi-hostpathplugin-68t4d\" (UID: \"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2\") " pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.173545 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.177805 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.189177 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kl7f\" (UniqueName: \"kubernetes.io/projected/445d4b66-b926-4822-83bc-13ff2ceb19ea-kube-api-access-9kl7f\") pod \"olm-operator-6b444d44fb-nkhj2\" (UID: \"445d4b66-b926-4822-83bc-13ff2ceb19ea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.220641 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.225900 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.231618 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n4xzq" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.238515 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.268303 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-68t4d" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.280269 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hgtw2" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.280792 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.281129 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.781118026 +0000 UTC m=+148.439154136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.386556 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.386847 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.886832997 +0000 UTC m=+148.544869107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.460407 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4"] Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.475135 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c629z"] Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.488319 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.488870 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:15.988859607 +0000 UTC m=+148.646895717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.511488 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4"] Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.591898 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.592157 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-serving-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.592211 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 
10:39:15.592234 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-serving-cert\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.592272 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.092253237 +0000 UTC m=+148.750289347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.592306 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.592358 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.592382 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.592432 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-image-import-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.592638 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-encryption-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.593744 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " 
pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.593892 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-etcd-serving-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.594176 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.094161835 +0000 UTC m=+148.752197945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.594180 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-audit\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.594270 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.594867 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-image-import-ca\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.646652 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" event={"ID":"cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1","Type":"ContainerStarted","Data":"817b93c57a60d5f156c727417bbccdcd7dc4301dad6b1d6ffe84e1769ff8ef8b"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.647110 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-serving-cert\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.648701 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4d8cf2a-15db-4bcc-b80c-9e9a273fca82-encryption-config\") pod \"apiserver-76f77b778f-62jj5\" (UID: \"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82\") " pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc 
kubenswrapper[4975]: I0122 10:39:15.650624 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" event={"ID":"5a11604f-1c29-4f85-8008-2883eaba3233","Type":"ContainerStarted","Data":"2520152d551f95f413640cf80d52b49939f3883e8a1e01e5a358752a9a9b9af0"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.652059 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" event={"ID":"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815","Type":"ContainerStarted","Data":"0d7b3f4f2b8b4fca35be609770c4d5754f350d58626a86b0404bb04e0366bd25"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.656416 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" event={"ID":"af2b3377-4d92-4b44-bee1-338c9156c095","Type":"ContainerStarted","Data":"fbf05705a0ca0eac802890bf17e22e1bc617481d16336808c41cfe361e21a06a"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.656455 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" event={"ID":"af2b3377-4d92-4b44-bee1-338c9156c095","Type":"ContainerStarted","Data":"1db391c85917ba555b038501d7024597bd881b418ad8850a92fc7c8973859eea"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.681502 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" event={"ID":"e7447dea-7435-4f66-94d2-19d28357acad","Type":"ContainerStarted","Data":"070a47afac8beeb3ddef4c29d5346ff4a6c8d25204e5d9f9e6239cda40ca9a2b"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.693807 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.693982 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.193960898 +0000 UTC m=+148.851997008 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.694229 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.694893 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" event={"ID":"3c451d4b-5258-4a31-a438-d93a650f354f","Type":"ContainerStarted","Data":"2f82ad6965c915c7c165992661590e2dde3d401351c642521e210836f3d6ff5e"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.695239 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.697525 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.197514524 +0000 UTC m=+148.855550634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.710157 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" event={"ID":"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e","Type":"ContainerStarted","Data":"8f95d4b75fd5a4ecaabe239b34326647230c715eb6bd8d4f5476b1e4a9548030"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.719598 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" event={"ID":"45531d0c-698f-4013-9b53-ccfb8f76268f","Type":"ContainerStarted","Data":"82acb5a3c077f4c7af4525dd80c9689d0e72e380943b7718bacadd646cbb1e20"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.738095 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" event={"ID":"5f4e900b-eda0-47d5-8c08-aca5c5468915","Type":"ContainerStarted","Data":"a723ef3777a0a4eebab737136c662fd8db686c4c44cc944f7389f56dae3a1333"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.745145 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" event={"ID":"9261771a-1df2-4fa4-a5e1-270602b5e2ad","Type":"ContainerStarted","Data":"840581a672548fc81fa2db6a5feb618307cdafb134a7cfb7890086daa8bfa354"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.745182 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" event={"ID":"9261771a-1df2-4fa4-a5e1-270602b5e2ad","Type":"ContainerStarted","Data":"02e9d0f28efb1f71221145be448e8bbdde70f7b55ae9d32cd0ba27419870cb4b"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.758297 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" event={"ID":"881a346a-95ca-4ef3-a824-b10d11a6b049","Type":"ContainerStarted","Data":"9236b3946348b38d94e2ee56ec79b599b0ac023e5ea0fbadbe4ed3c897790a44"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.800209 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2jxxh" event={"ID":"3ade24ae-bd68-4ce7-ae8c-de8f35a850d9","Type":"ContainerStarted","Data":"7df0394449519c66d8fd422cdbcb75606a0323e3fcfc2f5cdccd04e087e0fcd6"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.801427 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.809388 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" 
event={"ID":"20e6fd86-f979-409b-9577-6ce3a00ff73d","Type":"ContainerStarted","Data":"d43b83c4c79c000becbe9db8b21ea0c9f777226a6fe22783c94ba205edab3c8c"} Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.810212 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.310192474 +0000 UTC m=+148.968228584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.810923 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.811356 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.31134606 +0000 UTC m=+148.969382170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.815909 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7pt74" event={"ID":"a92ef0de-4810-4e00-8f34-90fcb13c236c","Type":"ContainerStarted","Data":"eada703cc76ff389d11b5c42bb7d2e10770e9c8cb38f1a2fc9bfd794522f2d47"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.829432 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" event={"ID":"a85dd0b4-d50c-433c-a510-238a0f9a22bb","Type":"ContainerStarted","Data":"49e753f9eafc41ffa79a5995e5efb45b5d26133060bc2af2d83b52df2e5e0e06"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.888300 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" event={"ID":"4e7efaac-9ce4-4487-9d6b-cd97ab115921","Type":"ContainerStarted","Data":"60c32d1e9ee641f6b3c0e57c9c29f601eb069898274c56ffe42694814212ce24"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.890461 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.892699 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-42cdr"] Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.913053 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:15 crc kubenswrapper[4975]: E0122 10:39:15.914222 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.414203114 +0000 UTC m=+149.072239224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.922778 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9"] Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.941095 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ptf2x" event={"ID":"62a0f3fd-9bc0-4505-999b-2cc156ebd430","Type":"ContainerStarted","Data":"db0e2109bfff271d68511f0946724afb9792fa4c21f1817d21e94b615aeba07c"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.941271 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ptf2x" Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.941288 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ptf2x" event={"ID":"62a0f3fd-9bc0-4505-999b-2cc156ebd430","Type":"ContainerStarted","Data":"29b4e609a6a2dfb898b4e243ac238f5c4a788538406df331365eff1dcf309399"} Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.941388 4975 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptf2x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.941415 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ptf2x" podUID="62a0f3fd-9bc0-4505-999b-2cc156ebd430" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 22 10:39:15 crc kubenswrapper[4975]: W0122 10:39:15.963769 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbefe9235_f8a1_4fda_ace8_2752310375e3.slice/crio-f3086130bef4aeece36bbf0e271fd1586771ec67c5b3e82652709a01ce67e1ca WatchSource:0}: Error finding container f3086130bef4aeece36bbf0e271fd1586771ec67c5b3e82652709a01ce67e1ca: Status 404 returned error can't find the container with id f3086130bef4aeece36bbf0e271fd1586771ec67c5b3e82652709a01ce67e1ca Jan 22 10:39:15 crc kubenswrapper[4975]: I0122 10:39:15.964254 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.014047 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.014881 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d"] Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.015236 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.515225353 +0000 UTC m=+149.173261463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.044536 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww"] Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.054063 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z"] Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.082957 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z"] Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.113416 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t"] Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.115607 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.115885 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.615866211 +0000 UTC m=+149.273902321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.134816 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dfvnx"] Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.189792 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gdlsh"] Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.211032 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" podStartSLOduration=126.211003794 podStartE2EDuration="2m6.211003794s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:16.210594332 +0000 UTC m=+148.868630462" watchObservedRunningTime="2026-01-22 10:39:16.211003794 +0000 UTC m=+148.869039914" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.222218 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.223170 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.223480 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.723470089 +0000 UTC m=+149.381506189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.229157 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh"] Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.230616 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs"] Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.324065 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.324624 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.824601232 +0000 UTC m=+149.482637352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: W0122 10:39:16.345524 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecefe468_0f5a_4b37_99e2_a09d6f59f7fd.slice/crio-3029f0c8aa7b80f2fe258fab9ec0347e6098123834509d61c56a76e0dbf4b37b WatchSource:0}: Error finding container 3029f0c8aa7b80f2fe258fab9ec0347e6098123834509d61c56a76e0dbf4b37b: Status 404 returned error can't find the container with id 3029f0c8aa7b80f2fe258fab9ec0347e6098123834509d61c56a76e0dbf4b37b Jan 22 10:39:16 crc kubenswrapper[4975]: W0122 10:39:16.373639 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81fdabfd_7410_4ca7_855b_c1764b58b808.slice/crio-e7ab00ec4093a3429e3cafc6fcc4a412a76a3730f3239656a2795f3bb902ecba WatchSource:0}: Error finding container e7ab00ec4093a3429e3cafc6fcc4a412a76a3730f3239656a2795f3bb902ecba: Status 404 returned error can't find the container with id e7ab00ec4093a3429e3cafc6fcc4a412a76a3730f3239656a2795f3bb902ecba Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.426891 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.437245 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.437728 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:16.937711525 +0000 UTC m=+149.595747635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.539926 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.540574 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.040558169 +0000 UTC m=+149.698594279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.619041 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ptf2x" podStartSLOduration=126.61901819 podStartE2EDuration="2m6.61901819s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:16.617736181 +0000 UTC m=+149.275772291" watchObservedRunningTime="2026-01-22 10:39:16.61901819 +0000 UTC m=+149.277054300" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.640506 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l6m7w" podStartSLOduration=125.640489096 podStartE2EDuration="2m5.640489096s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:16.639538767 +0000 UTC m=+149.297574877" watchObservedRunningTime="2026-01-22 10:39:16.640489096 +0000 UTC m=+149.298525206" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.642392 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.642672 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.142660411 +0000 UTC m=+149.800696521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.740214 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2jxxh" podStartSLOduration=125.740199746 podStartE2EDuration="2m5.740199746s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:16.735213596 +0000 UTC m=+149.393249706" watchObservedRunningTime="2026-01-22 10:39:16.740199746 +0000 UTC m=+149.398235856" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.744807 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.745280 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.245262678 +0000 UTC m=+149.903298788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.772170 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ff77j" podStartSLOduration=125.772153478 podStartE2EDuration="2m5.772153478s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:16.770377584 +0000 UTC m=+149.428413694" watchObservedRunningTime="2026-01-22 10:39:16.772153478 +0000 UTC m=+149.430189588" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.847039 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.847352 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.347319739 +0000 UTC m=+150.005355849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.943514 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:16 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:16 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:16 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.943558 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.951405 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.953423 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.45339684 +0000 UTC m=+150.111432950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:16 crc kubenswrapper[4975]: I0122 10:39:16.954219 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:16 crc kubenswrapper[4975]: E0122 10:39:16.954574 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.454562406 +0000 UTC m=+150.112598516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:16.997837 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" event={"ID":"c6c11e90-4ee6-4534-ae9b-8e62ec35d08e","Type":"ContainerStarted","Data":"f672273c9b339adf8a5d51b800925ce59451dec29a360fde6b37c07ea6fe6ea8"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.014357 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hgtw2" event={"ID":"aabe738d-7f14-493c-93bd-a92ec26d99a7","Type":"ContainerStarted","Data":"e35e25fd9bf1661f8a75ed445104eebcc75667f245b4949e1facdc0c94a3bd27"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.014397 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hgtw2" event={"ID":"aabe738d-7f14-493c-93bd-a92ec26d99a7","Type":"ContainerStarted","Data":"90f65d728d19a9e0bce91180389a94e46c041e5ae8681dc7ed366c0f132bc722"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.040400 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" event={"ID":"81fdabfd-7410-4ca7-855b-c1764b58b808","Type":"ContainerStarted","Data":"e7ab00ec4093a3429e3cafc6fcc4a412a76a3730f3239656a2795f3bb902ecba"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.057024 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z5l8p" podStartSLOduration=126.057008918 podStartE2EDuration="2m6.057008918s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.032138499 +0000 UTC m=+149.690174609" watchObservedRunningTime="2026-01-22 10:39:17.057008918 +0000 UTC m=+149.715045028" Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.057212 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hgtw2" podStartSLOduration=6.057208314 podStartE2EDuration="6.057208314s" podCreationTimestamp="2026-01-22 10:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.055977697 +0000 UTC m=+149.714013807" watchObservedRunningTime="2026-01-22 10:39:17.057208314 +0000 UTC m=+149.715244424" Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.057901 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7pt74" event={"ID":"a92ef0de-4810-4e00-8f34-90fcb13c236c","Type":"ContainerStarted","Data":"8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.058989 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.059286 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.559276036 +0000 UTC m=+150.217312136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.071298 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t" event={"ID":"20ca2f64-d03d-4d54-94b4-af04fcb499ce","Type":"ContainerStarted","Data":"31df964528d01dd3f6b7e69b0ff548491e5563ead00a4e890094641e8506040a"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.072018 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" event={"ID":"48c3f95e-78eb-4e10-bd32-4c4644190f01","Type":"ContainerStarted","Data":"ee0df0bdfe719c54c1c48f65530502d4a3e883a32e51461e5b9d832d9afc00d6"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.076896 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" event={"ID":"af2b3377-4d92-4b44-bee1-338c9156c095","Type":"ContainerStarted","Data":"f2c368bea321b331a407789cd71d8af011c4d5276178cf7c450c0c68e3df27c4"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.083594 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-7pt74" podStartSLOduration=127.083576828 podStartE2EDuration="2m7.083576828s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.081925357 +0000 UTC m=+149.739961467" watchObservedRunningTime="2026-01-22 10:39:17.083576828 +0000 UTC m=+149.741612938" Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.092408 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"422b3dc313e2fe53bc752289fc7fe3751d088cb1cc2dc0403f47e84e4be09bfc"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.132324 4975 generic.go:334] "Generic (PLEG): container finished" podID="56ade85a-531d-4cc3-bb1c-9e8d6fc6a815" containerID="00f37d7897e405b32c92dba01a08603f6d36cbecf0b16c2a2e114ecbdab17a5e" exitCode=0 Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.132459 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" 
event={"ID":"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815","Type":"ContainerDied","Data":"00f37d7897e405b32c92dba01a08603f6d36cbecf0b16c2a2e114ecbdab17a5e"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.161276 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-prtg7" podStartSLOduration=126.161259645 podStartE2EDuration="2m6.161259645s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.122825028 +0000 UTC m=+149.780861138" watchObservedRunningTime="2026-01-22 10:39:17.161259645 +0000 UTC m=+149.819295755" Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.162696 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.164239 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.664227464 +0000 UTC m=+150.322263574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.164597 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj"] Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.209007 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" event={"ID":"3c451d4b-5258-4a31-a438-d93a650f354f","Type":"ContainerStarted","Data":"29b43b5dc552d0198cc6e6590a2d90f394979a61d32f351dbcae2d18ec00a305"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.248477 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sk2bb"] Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.249569 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" event={"ID":"5a11604f-1c29-4f85-8008-2883eaba3233","Type":"ContainerStarted","Data":"f055444f378dd561a052e6e7104f8f330362d86151ffabe59c13d414290b892f"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.253985 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z6j7l"] Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.264100 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.264916 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.764900032 +0000 UTC m=+150.422936142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.281066 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-h2jwh" podStartSLOduration=127.281046168 podStartE2EDuration="2m7.281046168s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.250051886 +0000 UTC m=+149.908087996" watchObservedRunningTime="2026-01-22 10:39:17.281046168 +0000 UTC m=+149.939082278" Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.290410 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" event={"ID":"4fd3bde3-2279-48fc-9962-33f16d29dab2","Type":"ContainerStarted","Data":"e0eb934a7e83922424422599b4e3edfd36bb0146b5850b395a3cfb6e2c6d141e"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.300761 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" event={"ID":"e7447dea-7435-4f66-94d2-19d28357acad","Type":"ContainerStarted","Data":"4b60e4d0114c7f8552a5a0803c44ef11c5224380dd25d27164283f2fce440102"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.306106 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" event={"ID":"19947433-0cc9-4d2c-87e0-77440921e949","Type":"ContainerStarted","Data":"4f90ce2773c21b12be7fbaaaf0eda720feadc07622292e35147240f35d849fb4"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.316641 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" event={"ID":"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2","Type":"ContainerStarted","Data":"edd8bd1cc24f318767b1189fbe263dea4e6c6e8fb997dfba8f56d3d93aad7457"} Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.322625 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" podStartSLOduration=127.322606009 podStartE2EDuration="2m7.322606009s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.321926919 +0000 UTC m=+149.979963029" watchObservedRunningTime="2026-01-22 10:39:17.322606009 +0000 UTC m=+149.980642119" Jan 22 10:39:17 
crc kubenswrapper[4975]: I0122 10:39:17.362134 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" event={"ID":"cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1","Type":"ContainerStarted","Data":"eac5b556ef5290448eb38573622ff62bb0764462d7f2bccfb9fec63513557235"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.368025 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" event={"ID":"88f51483-70e8-478e-8082-77a1372f2e8f","Type":"ContainerStarted","Data":"e8e0bc09acd190c0a61b363e920e7235ecd705e4cbc69117f1b7fe4289a86f7b"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.368065 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" event={"ID":"88f51483-70e8-478e-8082-77a1372f2e8f","Type":"ContainerStarted","Data":"c1b42a7b927b8e2b6a94b35c228014bc9063bc6766054042fe932aa700aa8768"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.370736 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" event={"ID":"45531d0c-698f-4013-9b53-ccfb8f76268f","Type":"ContainerStarted","Data":"5d727a8fd14586ab809f33e1f27ac9f11aa16e9ae74f9436d7e07656a6a80683"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.371001 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.372009 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.871996665 +0000 UTC m=+150.530032775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.387921 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" event={"ID":"20e6fd86-f979-409b-9577-6ce3a00ff73d","Type":"ContainerStarted","Data":"a3b160155690ed814c14af8a7bf354dabb1e670090a3ec3f76c86419c439f5dd"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.388633 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw"
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.402718 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxzg4" podStartSLOduration=127.402699479 podStartE2EDuration="2m7.402699479s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.401781911 +0000 UTC m=+150.059818021" watchObservedRunningTime="2026-01-22 10:39:17.402699479 +0000 UTC m=+150.060735589"
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.426736 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" event={"ID":"a85dd0b4-d50c-433c-a510-238a0f9a22bb","Type":"ContainerStarted","Data":"008c331084c44a579d460dff43b6789d879837c39a3b1884c3c84948468218f6"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.435819 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l"]
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.444870 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 10:39:17 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld
Jan 22 10:39:17 crc kubenswrapper[4975]: [+]process-running ok
Jan 22 10:39:17 crc kubenswrapper[4975]: healthz check failed
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.445045 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.462681 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zc7v2" podStartSLOduration=126.462667553 podStartE2EDuration="2m6.462667553s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.4619055 +0000 UTC m=+150.119941610" watchObservedRunningTime="2026-01-22 10:39:17.462667553 +0000 UTC m=+150.120703653"
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.466211 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" event={"ID":"befe9235-f8a1-4fda-ace8-2752310375e3","Type":"ContainerStarted","Data":"f3086130bef4aeece36bbf0e271fd1586771ec67c5b3e82652709a01ce67e1ca"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.475005 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.475134 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.975104458 +0000 UTC m=+150.633140568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.475342 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.477826 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:17.977816619 +0000 UTC m=+150.635852729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.503845 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n4xzq"]
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.509688 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-68t4d"]
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.510094 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" event={"ID":"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd","Type":"ContainerStarted","Data":"3029f0c8aa7b80f2fe258fab9ec0347e6098123834509d61c56a76e0dbf4b37b"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.515219 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" event={"ID":"5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f","Type":"ContainerStarted","Data":"0b84f74f42b51ca8f6273fe94f78249845eda8bbd59f668dc18105c8a0636e87"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.521036 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" podStartSLOduration=127.521021559 podStartE2EDuration="2m7.521021559s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.519921686 +0000 UTC m=+150.177957796" watchObservedRunningTime="2026-01-22 10:39:17.521021559 +0000 UTC m=+150.179057669"
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.524861 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" event={"ID":"5f4e900b-eda0-47d5-8c08-aca5c5468915","Type":"ContainerStarted","Data":"57b83b4bd72a1f032d31834107e0333d95dc6ed7f9d2ff4f335e703ddfd566e8"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.544728 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" event={"ID":"881a346a-95ca-4ef3-a824-b10d11a6b049","Type":"ContainerStarted","Data":"5724d7a916793e93b79691511683fe66b3b7def32e65daa7a6810db7b9e18241"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.575594 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" event={"ID":"afc0a96d-0893-4c54-a6df-52f9f757f72e","Type":"ContainerStarted","Data":"3eb3b6c23ff881f2a40b0d7d11063ee7510c6603df8e6b30da55355248d8fa95"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.578136 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.578942 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.078927132 +0000 UTC m=+150.736963242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.607947 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2"]
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.608482 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" podStartSLOduration=126.60846614 podStartE2EDuration="2m6.60846614s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:17.604081708 +0000 UTC m=+150.262117828" watchObservedRunningTime="2026-01-22 10:39:17.60846614 +0000 UTC m=+150.266502250"
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.610606 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" event={"ID":"20dff28e-fc21-42b9-9c2a-1ee8588ed83e","Type":"ContainerStarted","Data":"bbda8ac4037fad87b5bf8be7d27620ab0b6a40b1d52a886ff3ed12859a03cce6"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.650776 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n6dj7"]
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.654253 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" event={"ID":"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16","Type":"ContainerStarted","Data":"5c15b0241e53080ad5a87d399e4e6ccd28cd4545acc8c42cb59724526bd847d9"}
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.655586 4975 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptf2x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.655622 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ptf2x" podUID="62a0f3fd-9bc0-4505-999b-2cc156ebd430" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.679055 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw"
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.687889 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.690061 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.190045554 +0000 UTC m=+150.848081664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.733211 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-62jj5"]
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.798800 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.800107 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.300089236 +0000 UTC m=+150.958125346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:17 crc kubenswrapper[4975]: W0122 10:39:17.823815 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd677fccc_b7e9_4850_83af_2246938ebf69.slice/crio-fa5e20a0b5c5df02260c15a350722017d0972bde0ba6c73b87863bd48283d42b WatchSource:0}: Error finding container fa5e20a0b5c5df02260c15a350722017d0972bde0ba6c73b87863bd48283d42b: Status 404 returned error can't find the container with id fa5e20a0b5c5df02260c15a350722017d0972bde0ba6c73b87863bd48283d42b
Jan 22 10:39:17 crc kubenswrapper[4975]: I0122 10:39:17.907369 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:17 crc kubenswrapper[4975]: E0122 10:39:17.910433 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.410402734 +0000 UTC m=+151.068438844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.008374 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.008513 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.508490746 +0000 UTC m=+151.166526856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.008855 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.009150 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.509138485 +0000 UTC m=+151.167174585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.109503 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.110166 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.610143324 +0000 UTC m=+151.268179434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.110298 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.110668 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.610652519 +0000 UTC m=+151.268688629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.215201 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.216034 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.716009589 +0000 UTC m=+151.374045699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.319988 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.320391 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.820373529 +0000 UTC m=+151.478409639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.438652 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.439410 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:18.93939451 +0000 UTC m=+151.597430620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.442517 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 10:39:18 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld
Jan 22 10:39:18 crc kubenswrapper[4975]: [+]process-running ok
Jan 22 10:39:18 crc kubenswrapper[4975]: healthz check failed
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.442553 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.541143 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.541708 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.041692008 +0000 UTC m=+151.699728118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.642583 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.642879 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.142854101 +0000 UTC m=+151.800890211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.671197 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" event={"ID":"4fd3bde3-2279-48fc-9962-33f16d29dab2","Type":"ContainerStarted","Data":"3afe0d2fd6f11b39812af0d430cccf3341c02348ed320051dbabaa6c099485d1"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.696772 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lbqkh" podStartSLOduration=127.696755874 podStartE2EDuration="2m7.696755874s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:18.695495015 +0000 UTC m=+151.353531125" watchObservedRunningTime="2026-01-22 10:39:18.696755874 +0000 UTC m=+151.354791984"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.698628 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" event={"ID":"5f4e900b-eda0-47d5-8c08-aca5c5468915","Type":"ContainerStarted","Data":"489b4c2aaba86348c17592a6537d37cae8e1f09f8a9c4768c8d4bbcd9428a608"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.728661 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n4xzq" event={"ID":"134e44c3-9a68-4785-b29c-ed473a0febc6","Type":"ContainerStarted","Data":"e6449a413e2704a9c56a22c4e21a2a1374b8097221346de5088619fd7829dd1e"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.728706 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n4xzq" event={"ID":"134e44c3-9a68-4785-b29c-ed473a0febc6","Type":"ContainerStarted","Data":"2b2531dee45962b3975d944d75224ba08a70ba13d190b8b4ecbea63364c2f168"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.734343 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z6j7l" event={"ID":"eaeb6317-904e-4fae-90f6-a8169f8f7b61","Type":"ContainerStarted","Data":"8c14dad54c3a050829f8945e511f67648b133ab061e0acc8682f64e1941d1990"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.734375 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z6j7l" event={"ID":"eaeb6317-904e-4fae-90f6-a8169f8f7b61","Type":"ContainerStarted","Data":"27225c0931a5a01814466800a79c1273dead8c4c9d6561b6a9349b6766ed0ca5"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.735543 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" event={"ID":"ffde48e3-a9b2-4616-b37b-29450881e65c","Type":"ContainerStarted","Data":"bbd8877257185581328a34f3f60a33f5c689ba5ce1f1393cec8b51be72818e18"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.735589 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" event={"ID":"ffde48e3-a9b2-4616-b37b-29450881e65c","Type":"ContainerStarted","Data":"6b537c5182264dd8d36c5fbf3fba0416e4de99ed4a527f742202fc3163ba7713"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.739419 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-z6j7l"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.741430 4975 patch_prober.go:28] interesting pod/console-operator-58897d9998-z6j7l container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.741475 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z6j7l" podUID="eaeb6317-904e-4fae-90f6-a8169f8f7b61" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.750284 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-n4xzq" podStartSLOduration=7.750268644 podStartE2EDuration="7.750268644s" podCreationTimestamp="2026-01-22 10:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:18.749760999 +0000 UTC m=+151.407797109" watchObservedRunningTime="2026-01-22 10:39:18.750268644 +0000 UTC m=+151.408304754"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.751103 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.752101 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.252089898 +0000 UTC m=+151.910126008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.752384 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6p98b" podStartSLOduration=128.752378476 podStartE2EDuration="2m8.752378476s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:18.716519578 +0000 UTC m=+151.374555688" watchObservedRunningTime="2026-01-22 10:39:18.752378476 +0000 UTC m=+151.410414586"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.752881 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" event={"ID":"afc0a96d-0893-4c54-a6df-52f9f757f72e","Type":"ContainerStarted","Data":"5d29f7796f9851b3dbdb3e518b584c760b51f2b0bee0fddb0174ce13c6ff62fe"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.765891 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" event={"ID":"56ade85a-531d-4cc3-bb1c-9e8d6fc6a815","Type":"ContainerStarted","Data":"0fe93b1545d906212fa4e6a98b7bf54e799336f0fefc003390243874148839b7"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.777707 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" event={"ID":"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82","Type":"ContainerStarted","Data":"93c4713a750fcf7bce36e5deb375ec6995839a782d062a9ab043670c017da57a"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.777745 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" event={"ID":"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82","Type":"ContainerStarted","Data":"8c044c8fe55b3b4fc8d8f7828c04eeb2a7ca01e9a4c2a8ef96afc0aae447f536"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.785788 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" event={"ID":"445d4b66-b926-4822-83bc-13ff2ceb19ea","Type":"ContainerStarted","Data":"fe00b589dfbe966d0ad7c4229ea1239d66d1c5650bdcd583ef5a190ba9dd4a29"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.785818 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" event={"ID":"445d4b66-b926-4822-83bc-13ff2ceb19ea","Type":"ContainerStarted","Data":"680c1512da332ca6239ed71af32d63e1a459d025f2fe7c2204b9c7c0b13ff038"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.787316 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.787389 4975 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nkhj2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.787414 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" podUID="445d4b66-b926-4822-83bc-13ff2ceb19ea" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.788239 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-z6j7l" podStartSLOduration=128.788222355 podStartE2EDuration="2m8.788222355s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:18.786014998 +0000 UTC m=+151.444051108" watchObservedRunningTime="2026-01-22 10:39:18.788222355 +0000 UTC m=+151.446258465"
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.795576 4975 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4d8cf2a_15db_4bcc_b80c_9e9a273fca82.slice/crio-conmon-93c4713a750fcf7bce36e5deb375ec6995839a782d062a9ab043670c017da57a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4d8cf2a_15db_4bcc_b80c_9e9a273fca82.slice/crio-93c4713a750fcf7bce36e5deb375ec6995839a782d062a9ab043670c017da57a.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.807294 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" event={"ID":"19947433-0cc9-4d2c-87e0-77440921e949","Type":"ContainerStarted","Data":"29e075a1db340e4647e14505a57b4bc74369355fba0f2349a03117b1732e70da"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.820707 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nng6l" podStartSLOduration=128.820682812 podStartE2EDuration="2m8.820682812s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:18.814526737 +0000 UTC m=+151.472562847" watchObservedRunningTime="2026-01-22 10:39:18.820682812 +0000 UTC m=+151.478718922"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.828020 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" event={"ID":"81fdabfd-7410-4ca7-855b-c1764b58b808","Type":"ContainerStarted","Data":"3e6dbcc74f3cd802ae759a53a0805606d483b01b1e8bdea3ec3f7529c9411dbe"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.837325 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" event={"ID":"1b4c680b-2a21-42d7-8362-9a240b75e000","Type":"ContainerStarted","Data":"cf37a7e18e8c5d4ab23ed066009a18ccbab548c918a09d1cf3b99ec04720c497"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.837384 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" event={"ID":"1b4c680b-2a21-42d7-8362-9a240b75e000","Type":"ContainerStarted","Data":"f7eae30d80154e55e7df651c3aa627838b028c88d03b55a567e6a5efc685db8d"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.854299 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.857663 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.357619523 +0000 UTC m=+152.015655623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.858982 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" event={"ID":"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2","Type":"ContainerStarted","Data":"e736a39f093742b2cd88750cf3f602b41be0c7573beee3f7b9acb5f0bc567f6f"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.859032 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" event={"ID":"7e9b0a90-2ef1-4fc2-bba8-fc1bc322dda2","Type":"ContainerStarted","Data":"c9a4973b631827752b2eadd08219ded06164f48683f363559cb59344e8c1c081"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.876233 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" podStartSLOduration=127.876210233 podStartE2EDuration="2m7.876210233s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:18.841787277 +0000 UTC m=+151.499823387" watchObservedRunningTime="2026-01-22 10:39:18.876210233 +0000 UTC m=+151.534246353"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.909619 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" event={"ID":"5a11604f-1c29-4f85-8008-2883eaba3233","Type":"ContainerStarted","Data":"8bfbf8003233d28e8ce884fda0be8a4ee7a9ff2c12640f102ab565625978b636"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.910244 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.912376 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" podStartSLOduration=127.912354681 podStartE2EDuration="2m7.912354681s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:18.910535555 +0000 UTC m=+151.568571665" watchObservedRunningTime="2026-01-22 10:39:18.912354681 +0000 UTC m=+151.570390791"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.922399 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.922435 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" event={"ID":"20dff28e-fc21-42b9-9c2a-1ee8588ed83e","Type":"ContainerStarted","Data":"c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.922452 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" event={"ID":"f6141abb-1a67-4419-b0a5-4796b4af2794","Type":"ContainerStarted","Data":"b2adcb42be11e3590b7f3d28b65ecbd55b6f4058a88a199361a2e9ddd1327792"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.922463 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" event={"ID":"f6141abb-1a67-4419-b0a5-4796b4af2794","Type":"ContainerStarted","Data":"59cf96ad32a2ce1fdd3a2bf46bdb9420645aeb8eca4fd4aee98bbcacc7fb303d"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.936755 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.938318 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dfvnx" podStartSLOduration=128.938306121 podStartE2EDuration="2m8.938306121s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:18.937047963 +0000 UTC m=+151.595084073" watchObservedRunningTime="2026-01-22 10:39:18.938306121 +0000 UTC m=+151.596342231"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.944173 4975 generic.go:334] "Generic (PLEG): container finished" podID="befe9235-f8a1-4fda-ace8-2752310375e3" containerID="0f1e24c0b50f41bfde150cc3717776606fff54e7310cca2c21963fac06c4b9a4" exitCode=0
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.944247 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" event={"ID":"befe9235-f8a1-4fda-ace8-2752310375e3","Type":"ContainerStarted","Data":"b3b95b2376187f8f0d99c22465dd80af089fa0f8b285d29580b4855039a429d3"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.944268 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" event={"ID":"befe9235-f8a1-4fda-ace8-2752310375e3","Type":"ContainerDied","Data":"0f1e24c0b50f41bfde150cc3717776606fff54e7310cca2c21963fac06c4b9a4"}
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.944843 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr"
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.959525 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:18 crc kubenswrapper[4975]: E0122 10:39:18.960091 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.460077946 +0000 UTC m=+152.118114126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:18 crc kubenswrapper[4975]: I0122 10:39:18.979644 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-csxrs" podStartSLOduration=128.979629155 podStartE2EDuration="2m8.979629155s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:18.979082797 +0000 UTC m=+151.637118907" watchObservedRunningTime="2026-01-22 10:39:18.979629155 +0000 UTC m=+151.637665265"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.009748 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" podStartSLOduration=128.00973488 podStartE2EDuration="2m8.00973488s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.007347358 +0000 UTC m=+151.665383458" watchObservedRunningTime="2026-01-22 10:39:19.00973488 +0000 UTC m=+151.667770990"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.020663 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" event={"ID":"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16","Type":"ContainerStarted","Data":"22be9eebc73aa8c14916e5f8a715e5e40fda5d35cab5697076874fa8f36a56d2"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.020708 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" event={"ID":"b6c90c94-832e-4f36-9b9b-a2bb8c0f7b16","Type":"ContainerStarted","Data":"c5031d469d34c7bc7fc72b26088ecbed9d86fa52a51c35d9740db96da0fbdd3f"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.043154 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t" event={"ID":"20ca2f64-d03d-4d54-94b4-af04fcb499ce","Type":"ContainerStarted","Data":"8cadef38d2a5f1a801acda9ffe4652ef52f1c3c69545160b3b9b2b28ad5fcae6"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.043212 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t" event={"ID":"20ca2f64-d03d-4d54-94b4-af04fcb499ce","Type":"ContainerStarted","Data":"5e82aa7a2ed80cabd42b726ba6f2adc7e6072d6fcde4798cc0b8f73fc615fcf7"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.050644 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" podStartSLOduration=128.050630061 podStartE2EDuration="2m8.050630061s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.031071622 +0000 UTC m=+151.689107732" watchObservedRunningTime="2026-01-22 10:39:19.050630061 +0000 UTC m=+151.708666171"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.060236 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.062362 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"95831871edf3e9039e2b8c9c0b1ffa4f4c87ca03350aeb7a103b42f840b5b8a0"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.062396 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d2d96d1363e6410f1773ed4bf1b8708e93302ed2f4f04c151ac8a65627af9e5b"}
Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.062916 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.562900929 +0000 UTC m=+152.220937039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.084218 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" podStartSLOduration=129.084200341 podStartE2EDuration="2m9.084200341s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.051125305 +0000 UTC m=+151.709161415" watchObservedRunningTime="2026-01-22 10:39:19.084200341 +0000 UTC m=+151.742236461"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.104520 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n6dj7" event={"ID":"d677fccc-b7e9-4850-83af-2246938ebf69","Type":"ContainerStarted","Data":"7ac44956566528048c4129a682390296a8133c88dbe3cf400838ea930415521b"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.104568 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n6dj7" event={"ID":"d677fccc-b7e9-4850-83af-2246938ebf69","Type":"ContainerStarted","Data":"fa5e20a0b5c5df02260c15a350722017d0972bde0ba6c73b87863bd48283d42b"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.124028 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2a429599486d2560dcfc2a744a74dd78a75eb8fde985d58ec05643fba14b7ced"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.138843 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qqvqj" podStartSLOduration=128.138827014 podStartE2EDuration="2m8.138827014s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.082895051 +0000 UTC m=+151.740931161" watchObservedRunningTime="2026-01-22 10:39:19.138827014 +0000 UTC m=+151.796863124"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.138888 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzh7w" event={"ID":"e7447dea-7435-4f66-94d2-19d28357acad","Type":"ContainerStarted","Data":"b46c215426e69062809e4c776b6020fee36d8972c4e9148adb8c9e162b028613"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.156166 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" event={"ID":"a85dd0b4-d50c-433c-a510-238a0f9a22bb","Type":"ContainerStarted","Data":"b8da5a7a38b5cfb2e0f0f1d82b159a0a73e5a88dad265b299099ae3d9e4e4e04"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.162122 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.162508 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.662494326 +0000 UTC m=+152.320530436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.168812 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" podStartSLOduration=129.168795816 podStartE2EDuration="2m9.168795816s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.138051731 +0000 UTC m=+151.796087861" watchObservedRunningTime="2026-01-22 10:39:19.168795816 +0000 UTC m=+151.826831916"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.169976 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v7hww" podStartSLOduration=129.169971022 podStartE2EDuration="2m9.169971022s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.168072904 +0000 UTC m=+151.826109014" watchObservedRunningTime="2026-01-22 10:39:19.169971022 +0000 UTC m=+151.828007132"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.179377 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-68t4d" event={"ID":"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2","Type":"ContainerStarted","Data":"f63ba7a2ccb9091c74a3474b6c02bc3d8d16fa63125163494a7f666b7360e43f"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.191803 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-sk2bb" podStartSLOduration=128.191786507 podStartE2EDuration="2m8.191786507s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.187708615 +0000 UTC m=+151.845744735" watchObservedRunningTime="2026-01-22 10:39:19.191786507 +0000 UTC m=+151.849822617"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.198051 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" event={"ID":"ecefe468-0f5a-4b37-99e2-a09d6f59f7fd","Type":"ContainerStarted","Data":"00924fe77c70b5d0acce66b2129783565435bd8991660f11e87041ef57f97cb9"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.199226 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.214938 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1e466be7a43b26e916601a145b5986ea2ddc99e09194c8d72740ace4d48d4edc"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.214979 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a47625ff162c58d61c80524b0c2e11a158622b83d0e9a5bc71c9dbe253e42215"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.215447 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.238593 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" event={"ID":"cee65957-76bb-4ea4-bcf2-e9d85eb2ebd1","Type":"ContainerStarted","Data":"0fb73188495218117be2a51d7f65bb058e7c57fb19d3326f1375c09f12191f12"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.251262 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.252570 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cbp9" podStartSLOduration=128.252560756 podStartE2EDuration="2m8.252560756s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.251012409 +0000 UTC m=+151.909048519" watchObservedRunningTime="2026-01-22 10:39:19.252560756 +0000 UTC m=+151.910596866"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.260660 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l942m" event={"ID":"881a346a-95ca-4ef3-a824-b10d11a6b049","Type":"ContainerStarted","Data":"9036a8c5388918626519de4481ba3b0e887791af45485a29a90d79b7f0b9b5e9"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.269854 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.270943 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.770923329 +0000 UTC m=+152.428959439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.275527 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" event={"ID":"48c3f95e-78eb-4e10-bd32-4c4644190f01","Type":"ContainerStarted","Data":"c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.276154 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.288284 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.288441 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.299541 4975 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gdlsh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.299599 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.307419 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" event={"ID":"5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f","Type":"ContainerStarted","Data":"2ee41b475f11bb01a0888ea077911910ec641d1a64e2532bee228318bbf7bd9c"}
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.346611 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-b9kn8" podStartSLOduration=129.346594996 podStartE2EDuration="2m9.346594996s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.309122388 +0000 UTC m=+151.967158488" watchObservedRunningTime="2026-01-22 10:39:19.346594996 +0000 UTC m=+152.004631106"
Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.371926 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:19 crc kubenswrapper[4975]:
E0122 10:39:19.380072 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.880057303 +0000 UTC m=+152.538093413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.387861 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f246t" podStartSLOduration=128.387844026 podStartE2EDuration="2m8.387844026s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.348908065 +0000 UTC m=+152.006944175" watchObservedRunningTime="2026-01-22 10:39:19.387844026 +0000 UTC m=+152.045880136" Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.423093 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d6l9z" podStartSLOduration=128.423077167 podStartE2EDuration="2m8.423077167s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.421712265 +0000 UTC m=+152.079748375" watchObservedRunningTime="2026-01-22 10:39:19.423077167 +0000 UTC m=+152.081113277" Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.446897 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:19 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:19 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:19 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.446947 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.460209 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" podStartSLOduration=128.460188033 podStartE2EDuration="2m8.460188033s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.457401609 +0000 UTC m=+152.115437719" watchObservedRunningTime="2026-01-22 10:39:19.460188033 +0000 UTC m=+152.118224143" Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.476775 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.477077 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:19.977062061 +0000 UTC m=+152.635098171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.486475 4975 csr.go:261] certificate signing request csr-jsv59 is approved, waiting to be issued Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.531859 4975 csr.go:257] certificate signing request csr-jsv59 is issued Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.579290 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.579617 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.079606746 +0000 UTC m=+152.737642856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.596876 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-c629z" podStartSLOduration=128.596851395 podStartE2EDuration="2m8.596851395s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.595694301 +0000 UTC m=+152.253730411" watchObservedRunningTime="2026-01-22 10:39:19.596851395 +0000 UTC m=+152.254887505" Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.598261 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qw87d" podStartSLOduration=128.598256207 podStartE2EDuration="2m8.598256207s" podCreationTimestamp="2026-01-22 10:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:19.53918985 +0000 UTC m=+152.197225960" watchObservedRunningTime="2026-01-22 10:39:19.598256207 +0000 UTC m=+152.256292317" Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.680895 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.681397 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.181382468 +0000 UTC m=+152.839418578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.783953 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.784349 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.284317675 +0000 UTC m=+152.942353785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.884806 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.884993 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.384964904 +0000 UTC m=+153.043001014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.885055 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.885345 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.385318764 +0000 UTC m=+153.043354874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.970676 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:19 crc kubenswrapper[4975]: I0122 10:39:19.986509 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:19 crc kubenswrapper[4975]: E0122 10:39:19.986766 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.486750856 +0000 UTC m=+153.144786966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.087353 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.087665 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.587654502 +0000 UTC m=+153.245690612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.188503 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.188690 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.688664181 +0000 UTC m=+153.346700291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.188744 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.189023 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.689011311 +0000 UTC m=+153.347047411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.239187 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qjq6r"] Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.240129 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.242656 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.289638 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.289826 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.789796483 +0000 UTC m=+153.447832593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.289902 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-utilities\") pod \"community-operators-qjq6r\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.289980 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.290041 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-catalog-content\") pod \"community-operators-qjq6r\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.290081 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbgjb\" (UniqueName: \"kubernetes.io/projected/7ae0356f-f7ce-4d10-bef2-c322e09f9889-kube-api-access-hbgjb\") pod \"community-operators-qjq6r\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.290412 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.790394921 +0000 UTC m=+153.448431031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.300027 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjq6r"] Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.313187 4975 generic.go:334] "Generic (PLEG): container finished" podID="d4d8cf2a-15db-4bcc-b80c-9e9a273fca82" containerID="93c4713a750fcf7bce36e5deb375ec6995839a782d062a9ab043670c017da57a" exitCode=0 Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.313245 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" event={"ID":"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82","Type":"ContainerDied","Data":"93c4713a750fcf7bce36e5deb375ec6995839a782d062a9ab043670c017da57a"} Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.313271 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" event={"ID":"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82","Type":"ContainerStarted","Data":"12645b75d63c0fb18dcd9f83e23e5e6d8fd5b385e51470a8dfa9debcc0859be7"} Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.313280 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" event={"ID":"d4d8cf2a-15db-4bcc-b80c-9e9a273fca82","Type":"ContainerStarted","Data":"5936f71ca062d0e1b006f6b990f85d6d055961094c7c4d9de5fc868c57f42ce3"} Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.315888 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-68t4d" event={"ID":"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2","Type":"ContainerStarted","Data":"53aaf193c5edb1cf6fea5b20cfeef4564520755ccff23962233cfa26e67acea4"} Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.315921 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-68t4d" event={"ID":"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2","Type":"ContainerStarted","Data":"97616e970054ebc4dbd382203f40fdadffb6bb44a29ae2fbf585bd17af46bbf3"} Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.318132 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n6dj7" event={"ID":"d677fccc-b7e9-4850-83af-2246938ebf69","Type":"ContainerStarted","Data":"7be5c5210527512f6e33ebdbd8c71c7539ee5a624ec10dec206ba044f8fd20a9"} Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.318167 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.320988 4975 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gdlsh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.321047 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" 
podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.326247 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkhj2" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.333467 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5vjmf" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.380509 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" podStartSLOduration=130.380493273 podStartE2EDuration="2m10.380493273s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:20.378608466 +0000 UTC m=+153.036644596" watchObservedRunningTime="2026-01-22 10:39:20.380493273 +0000 UTC m=+153.038529383" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.391502 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.391659 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.891636498 +0000 UTC m=+153.549672608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.391882 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-catalog-content\") pod \"community-operators-qjq6r\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.392169 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbgjb\" (UniqueName: \"kubernetes.io/projected/7ae0356f-f7ce-4d10-bef2-c322e09f9889-kube-api-access-hbgjb\") pod \"community-operators-qjq6r\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.392246 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-utilities\") pod \"community-operators-qjq6r\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.392661 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.397280 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-catalog-content\") pod \"community-operators-qjq6r\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.397721 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-utilities\") pod \"community-operators-qjq6r\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.398984 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.898970108 +0000 UTC m=+153.557006218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.427151 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbgjb\" (UniqueName: \"kubernetes.io/projected/7ae0356f-f7ce-4d10-bef2-c322e09f9889-kube-api-access-hbgjb\") pod \"community-operators-qjq6r\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.437110 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-n6dj7" podStartSLOduration=9.437088876 podStartE2EDuration="9.437088876s" podCreationTimestamp="2026-01-22 10:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:20.426809416 +0000 UTC m=+153.084845536" watchObservedRunningTime="2026-01-22 10:39:20.437088876 +0000 UTC m=+153.095124986" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.449544 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:20 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:20 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:20 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.449622 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.471317 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q6cc7"] Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.472478 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.476470 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.498011 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-z6j7l" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.498266 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.498318 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:20.998304037 +0000 UTC m=+153.656340147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.507596 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.507964 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.007948437 +0000 UTC m=+153.665984547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.509372 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6cc7"] Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.533133 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 10:34:19 +0000 UTC, rotation deadline is 2026-11-28 15:15:28.834272793 +0000 UTC Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.533184 4975 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7444h36m8.301090777s for next certificate rotation Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.553658 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.612366 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.612610 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd57v\" (UniqueName: \"kubernetes.io/projected/27a32b6c-0030-4c5a-bdbe-61722e22e813-kube-api-access-nd57v\") pod \"certified-operators-q6cc7\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.612661 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-catalog-content\") pod \"certified-operators-q6cc7\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.612679 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-utilities\") pod \"certified-operators-q6cc7\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.612779 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.112764211 +0000 UTC m=+153.770800321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.639235 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mx25r"] Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.640225 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.642853 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mx25r"] Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.697656 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.697916 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.713514 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-catalog-content\") pod \"community-operators-mx25r\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.713562 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.713588 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-utilities\") pod \"community-operators-mx25r\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.713616 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd57v\" (UniqueName: \"kubernetes.io/projected/27a32b6c-0030-4c5a-bdbe-61722e22e813-kube-api-access-nd57v\") pod \"certified-operators-q6cc7\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.713655 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-catalog-content\") pod \"certified-operators-q6cc7\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.713672 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbxj\" (UniqueName: \"kubernetes.io/projected/01529967-a2f6-4fc1-936b-e99200ece5b1-kube-api-access-wjbxj\") pod \"community-operators-mx25r\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.713692 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-utilities\") pod \"certified-operators-q6cc7\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.714215 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-utilities\") pod \"certified-operators-q6cc7\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.714465 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.214455231 +0000 UTC m=+153.872491341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.714779 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-catalog-content\") pod \"certified-operators-q6cc7\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.745026 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd57v\" (UniqueName: \"kubernetes.io/projected/27a32b6c-0030-4c5a-bdbe-61722e22e813-kube-api-access-nd57v\") pod \"certified-operators-q6cc7\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.798182 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q6cc7"
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.814973 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.815179 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjbxj\" (UniqueName: \"kubernetes.io/projected/01529967-a2f6-4fc1-936b-e99200ece5b1-kube-api-access-wjbxj\") pod \"community-operators-mx25r\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " pod="openshift-marketplace/community-operators-mx25r"
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.815223 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-catalog-content\") pod \"community-operators-mx25r\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " pod="openshift-marketplace/community-operators-mx25r"
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.815266 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-utilities\") pod \"community-operators-mx25r\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " pod="openshift-marketplace/community-operators-mx25r"
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.815741 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-utilities\") pod \"community-operators-mx25r\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " pod="openshift-marketplace/community-operators-mx25r"
Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.815821 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.31580367 +0000 UTC m=+153.973839780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.829864 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-trqdn"]
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.830748 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.875494 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trqdn"]
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.906500 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjq6r"]
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.917280 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-utilities\") pod \"certified-operators-trqdn\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.917550 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk6zv\" (UniqueName: \"kubernetes.io/projected/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-kube-api-access-sk6zv\") pod \"certified-operators-trqdn\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.917606 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-catalog-content\") pod \"certified-operators-trqdn\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:20 crc kubenswrapper[4975]: I0122 10:39:20.917634 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:20 crc kubenswrapper[4975]: E0122 10:39:20.917982 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.417963294 +0000 UTC m=+154.075999504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:20 crc kubenswrapper[4975]: W0122 10:39:20.948963 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ae0356f_f7ce_4d10_bef2_c322e09f9889.slice/crio-66c05d5d4ede880fbe02d45dd0b3b3309312e94ef53164c396dec5aea8927c24 WatchSource:0}: Error finding container 66c05d5d4ede880fbe02d45dd0b3b3309312e94ef53164c396dec5aea8927c24: Status 404 returned error can't find the container with id 66c05d5d4ede880fbe02d45dd0b3b3309312e94ef53164c396dec5aea8927c24
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.009036 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6cc7"]
Jan 22 10:39:21 crc kubenswrapper[4975]: W0122 10:39:21.011721 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27a32b6c_0030_4c5a_bdbe_61722e22e813.slice/crio-22f57e1dcda7b443d30e6926c571a4b16ad4d778f39cf9771bfdfe63471a61c0 WatchSource:0}: Error finding container 22f57e1dcda7b443d30e6926c571a4b16ad4d778f39cf9771bfdfe63471a61c0: Status 404 returned error can't find the container with id 22f57e1dcda7b443d30e6926c571a4b16ad4d778f39cf9771bfdfe63471a61c0
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.018219 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.018477 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk6zv\" (UniqueName: \"kubernetes.io/projected/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-kube-api-access-sk6zv\") pod \"certified-operators-trqdn\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.018549 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-catalog-content\") pod \"certified-operators-trqdn\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.018615 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-utilities\") pod \"certified-operators-trqdn\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.019011 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-utilities\") pod \"certified-operators-trqdn\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.019125 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.519103957 +0000 UTC m=+154.177140067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.019637 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-catalog-content\") pod \"community-operators-mx25r\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " pod="openshift-marketplace/community-operators-mx25r"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.019688 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-catalog-content\") pod \"certified-operators-trqdn\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.029161 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjbxj\" (UniqueName: \"kubernetes.io/projected/01529967-a2f6-4fc1-936b-e99200ece5b1-kube-api-access-wjbxj\") pod \"community-operators-mx25r\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " pod="openshift-marketplace/community-operators-mx25r"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.046146 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk6zv\" (UniqueName: \"kubernetes.io/projected/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-kube-api-access-sk6zv\") pod \"certified-operators-trqdn\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.097389 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.098052 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.099621 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.102647 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.102887 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.130005 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.130283 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.630271371 +0000 UTC m=+154.288307471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.166419 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trqdn"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.179922 4975 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.231053 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.231221 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.731192838 +0000 UTC m=+154.389228938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.232731 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.232783 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.232802 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.233101 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.733088835 +0000 UTC m=+154.391124945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.260800 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx25r"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.333867 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.334088 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.334126 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.334662 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.8346351 +0000 UTC m=+154.492671210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.334732 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.375174 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.382587 4975 generic.go:334] "Generic (PLEG): container finished" podID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerID="033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365" exitCode=0
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.382651 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6cc7" event={"ID":"27a32b6c-0030-4c5a-bdbe-61722e22e813","Type":"ContainerDied","Data":"033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365"}
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.382676 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6cc7" event={"ID":"27a32b6c-0030-4c5a-bdbe-61722e22e813","Type":"ContainerStarted","Data":"22f57e1dcda7b443d30e6926c571a4b16ad4d778f39cf9771bfdfe63471a61c0"}
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.385362 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.402412 4975 generic.go:334] "Generic (PLEG): container finished" podID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerID="624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c" exitCode=0
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.402470 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjq6r" event={"ID":"7ae0356f-f7ce-4d10-bef2-c322e09f9889","Type":"ContainerDied","Data":"624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c"}
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.402494 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjq6r" event={"ID":"7ae0356f-f7ce-4d10-bef2-c322e09f9889","Type":"ContainerStarted","Data":"66c05d5d4ede880fbe02d45dd0b3b3309312e94ef53164c396dec5aea8927c24"}
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.411721 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-68t4d" event={"ID":"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2","Type":"ContainerStarted","Data":"35ea32a6628a8877dff77ea38f51f9073901e907a5af984e392e0c60564b8b2a"}
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.419500 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.420068 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.443003 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.444538 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.445038 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:21.945026422 +0000 UTC m=+154.603062532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.445475 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 10:39:21 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]process-running ok
Jan 22 10:39:21 crc kubenswrapper[4975]: healthz check failed
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.445549 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.456953 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trqdn"]
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.540106 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mx25r"]
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.546712 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.549191 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:22.049171855 +0000 UTC m=+154.707207965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.649679 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.650202 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:22.150190784 +0000 UTC m=+154.808226894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.751031 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.751551 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:22.251536374 +0000 UTC m=+154.909572484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.797156 4975 patch_prober.go:28] interesting pod/apiserver-76f77b778f-62jj5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]log ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]etcd ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/max-in-flight-filter ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 22 10:39:21 crc kubenswrapper[4975]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/openshift.io-startinformers ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 22 10:39:21 crc kubenswrapper[4975]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 22 10:39:21 crc kubenswrapper[4975]: livez check failed
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.797212 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" podUID="d4d8cf2a-15db-4bcc-b80c-9e9a273fca82" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.802533 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.853018 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.853370 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:22.353353927 +0000 UTC m=+155.011390037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.953951 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.954189 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:39:22.454155259 +0000 UTC m=+155.112191379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.954436 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:21 crc kubenswrapper[4975]: E0122 10:39:21.954857 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:39:22.454840961 +0000 UTC m=+155.112877071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-94jvp" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.973492 4975 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T10:39:21.179937196Z","Handler":null,"Name":""}
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.979218 4975 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 22 10:39:21 crc kubenswrapper[4975]: I0122 10:39:21.979257 4975 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.055207 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.060421 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.156308 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.158560 4975 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.158598 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.186467 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-94jvp\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.426192 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pg94w"]
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.428467 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 10:39:22 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld
Jan 22 10:39:22 crc kubenswrapper[4975]: [+]process-running ok
Jan 22 10:39:22 crc kubenswrapper[4975]: healthz check failed
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.434400 4975 generic.go:334] "Generic (PLEG): container finished" podID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerID="dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0" exitCode=0
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.450571 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.447365 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.447347008 podStartE2EDuration="1.447347008s" podCreationTimestamp="2026-01-22 10:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:22.445128702 +0000 UTC m=+155.103164812" watchObservedRunningTime="2026-01-22 10:39:22.447347008 +0000 UTC m=+155.105383118"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.452077 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg94w"]
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.452159 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94","Type":"ContainerStarted","Data":"c6696ebf81adc076286b8312b65702ce3105b859d68b62ec0b557f495bcce89f"}
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.452183 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94","Type":"ContainerStarted","Data":"1b43803b1b8d783cbd6ce52341503e6d8b309cf5e84e94f7bc8e9e0bb7ae8893"}
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.452197 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trqdn" event={"ID":"ac08af2a-139e-4898-9020-cb0f4cd0b1e1","Type":"ContainerDied","Data":"dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0"}
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.452210 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trqdn" event={"ID":"ac08af2a-139e-4898-9020-cb0f4cd0b1e1","Type":"ContainerStarted","Data":"289ab4be85572844c962f818758940b06e1bc3c4abd849586c7588da5241905e"}
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.452263 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.458795 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.460594 4975 generic.go:334] "Generic (PLEG): container finished" podID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerID="955893a5ed7f65869f2102646eb03ebaabdbcc7862f6d6b6acf377a655e000dc" exitCode=0
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.461390 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx25r" event={"ID":"01529967-a2f6-4fc1-936b-e99200ece5b1","Type":"ContainerDied","Data":"955893a5ed7f65869f2102646eb03ebaabdbcc7862f6d6b6acf377a655e000dc"}
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.461450 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx25r" event={"ID":"01529967-a2f6-4fc1-936b-e99200ece5b1","Type":"ContainerStarted","Data":"66a8ba492dc4caa04ee044d9b1bf1d0668bfb84e77f62f6ec8dc4e6a44df1568"}
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.474090 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-68t4d" event={"ID":"4dc8f7ee-ae46-4ef9-8889-e7dfa53980f2","Type":"ContainerStarted","Data":"d357817efff276c049aee545129da3e925b89e6fca1a0e944f7d8d639eb07a12"}
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.489866 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.516145 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-68t4d" podStartSLOduration=11.516131018 podStartE2EDuration="11.516131018s" podCreationTimestamp="2026-01-22 10:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:22.515378296 +0000 UTC m=+155.173414406" watchObservedRunningTime="2026-01-22 10:39:22.516131018 +0000 UTC m=+155.174167128"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.560513 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-utilities\") pod \"redhat-marketplace-pg94w\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.560866 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-catalog-content\") pod \"redhat-marketplace-pg94w\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.560935 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hsg\" (UniqueName: \"kubernetes.io/projected/2ef0df15-7a94-4740-9601-0a29b068f5e7-kube-api-access-w4hsg\") pod \"redhat-marketplace-pg94w\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.661991 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hsg\" (UniqueName: \"kubernetes.io/projected/2ef0df15-7a94-4740-9601-0a29b068f5e7-kube-api-access-w4hsg\") pod \"redhat-marketplace-pg94w\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.662060 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-utilities\") pod \"redhat-marketplace-pg94w\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.662155 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-catalog-content\") pod \"redhat-marketplace-pg94w\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.662705 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-catalog-content\") pod \"redhat-marketplace-pg94w\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.663705 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-utilities\") pod \"redhat-marketplace-pg94w\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.693499 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hsg\" (UniqueName: \"kubernetes.io/projected/2ef0df15-7a94-4740-9601-0a29b068f5e7-kube-api-access-w4hsg\") pod \"redhat-marketplace-pg94w\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.770948 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg94w"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.807842 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-94jvp"]
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.830342 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kfln6"]
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.843576 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.843677 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfln6"]
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.968360 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-utilities\") pod \"redhat-marketplace-kfln6\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.968505 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfmkk\" (UniqueName: \"kubernetes.io/projected/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-kube-api-access-xfmkk\") pod \"redhat-marketplace-kfln6\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:22 crc kubenswrapper[4975]: I0122 10:39:22.968546 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-catalog-content\") pod \"redhat-marketplace-kfln6\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.030144 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg94w"]
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.069512 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-catalog-content\") pod \"redhat-marketplace-kfln6\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.069578 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-utilities\") pod \"redhat-marketplace-kfln6\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.069626 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfmkk\" (UniqueName: \"kubernetes.io/projected/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-kube-api-access-xfmkk\") pod \"redhat-marketplace-kfln6\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.070276 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-catalog-content\") pod \"redhat-marketplace-kfln6\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.070411 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-utilities\") pod \"redhat-marketplace-kfln6\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.087890 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfmkk\" (UniqueName: \"kubernetes.io/projected/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-kube-api-access-xfmkk\") pod \"redhat-marketplace-kfln6\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.170234 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kfln6"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.431176 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lvzvd"]
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.432673 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.433932 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 10:39:23 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld
Jan 22 10:39:23 crc kubenswrapper[4975]: [+]process-running ok
Jan 22 10:39:23 crc kubenswrapper[4975]: healthz check failed
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.433988 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.434564 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.445874 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvzvd"]
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.493135 4975 generic.go:334] "Generic (PLEG): container finished" podID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerID="e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b" exitCode=0
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.493443 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg94w" event={"ID":"2ef0df15-7a94-4740-9601-0a29b068f5e7","Type":"ContainerDied","Data":"e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b"}
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.493471 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg94w" event={"ID":"2ef0df15-7a94-4740-9601-0a29b068f5e7","Type":"ContainerStarted","Data":"d161ebc6643d3d4c00ce0ee687a8907475cbabdeea6b38b876f1497f1345c976"}
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.500708 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" event={"ID":"588b9cff-3186-4fc2-9498-90fd2272fb66","Type":"ContainerStarted","Data":"78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b"}
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.500749 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" event={"ID":"588b9cff-3186-4fc2-9498-90fd2272fb66","Type":"ContainerStarted","Data":"3cafaaea2a4f16d683453086c7e56462fefa2d9cd9b0fe8b63a4e95b9043c232"}
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.501287 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.503693 4975 generic.go:334] "Generic (PLEG): container finished" podID="c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94" containerID="c6696ebf81adc076286b8312b65702ce3105b859d68b62ec0b557f495bcce89f" exitCode=0
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.503811 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94","Type":"ContainerDied","Data":"c6696ebf81adc076286b8312b65702ce3105b859d68b62ec0b557f495bcce89f"}
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.536730 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" podStartSLOduration=133.536703554 podStartE2EDuration="2m13.536703554s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:39:23.530047434 +0000 UTC m=+156.188083544" watchObservedRunningTime="2026-01-22 10:39:23.536703554 +0000 UTC m=+156.194739664"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.563655 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfln6"]
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.577859 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qwcw\" (UniqueName: \"kubernetes.io/projected/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-kube-api-access-4qwcw\") pod \"redhat-operators-lvzvd\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.578027 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-catalog-content\") pod \"redhat-operators-lvzvd\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.578128 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-utilities\") pod \"redhat-operators-lvzvd\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.680123 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-catalog-content\") pod \"redhat-operators-lvzvd\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.680217 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-utilities\") pod \"redhat-operators-lvzvd\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.680283 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qwcw\" (UniqueName: \"kubernetes.io/projected/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-kube-api-access-4qwcw\") pod \"redhat-operators-lvzvd\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.680982 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-catalog-content\") pod \"redhat-operators-lvzvd\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.681087 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-utilities\") pod \"redhat-operators-lvzvd\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.697315 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qwcw\" (UniqueName: \"kubernetes.io/projected/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-kube-api-access-4qwcw\") pod \"redhat-operators-lvzvd\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.764296 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.793668 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvzvd"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.834682 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h6p8c"]
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.838932 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.857717 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6p8c"]
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.882922 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-catalog-content\") pod \"redhat-operators-h6p8c\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.883007 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-utilities\") pod \"redhat-operators-h6p8c\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.883031 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hzv\" (UniqueName: \"kubernetes.io/projected/cece8116-dff3-488a-8ab2-0f25754d0609-kube-api-access-s4hzv\") pod \"redhat-operators-h6p8c\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.985234 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-catalog-content\") pod \"redhat-operators-h6p8c\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.985500 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-utilities\") pod \"redhat-operators-h6p8c\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.985527 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hzv\" (UniqueName: \"kubernetes.io/projected/cece8116-dff3-488a-8ab2-0f25754d0609-kube-api-access-s4hzv\") pod \"redhat-operators-h6p8c\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.986431 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-utilities\") pod \"redhat-operators-h6p8c\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:23 crc kubenswrapper[4975]: I0122 10:39:23.986768 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-catalog-content\") pod \"redhat-operators-h6p8c\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.003667 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hzv\" (UniqueName: \"kubernetes.io/projected/cece8116-dff3-488a-8ab2-0f25754d0609-kube-api-access-s4hzv\") pod \"redhat-operators-h6p8c\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " pod="openshift-marketplace/redhat-operators-h6p8c"
Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.042121 4975 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptf2x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.042166 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ptf2x" podUID="62a0f3fd-9bc0-4505-999b-2cc156ebd430" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.042756 4975 patch_prober.go:28] interesting pod/downloads-7954f5f757-ptf2x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.042805 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ptf2x" podUID="62a0f3fd-9bc0-4505-999b-2cc156ebd430" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.072589 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvzvd"]
Jan 22
10:39:24 crc kubenswrapper[4975]: W0122 10:39:24.084867 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bb39999_a9af_4a3b_bc78_427d76a9fbd9.slice/crio-64f07a25ca32f27875b0dd29a01c6b103eb86c2d2dd859b8cd9ec130fa411d55 WatchSource:0}: Error finding container 64f07a25ca32f27875b0dd29a01c6b103eb86c2d2dd859b8cd9ec130fa411d55: Status 404 returned error can't find the container with id 64f07a25ca32f27875b0dd29a01c6b103eb86c2d2dd859b8cd9ec130fa411d55 Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.102249 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.203115 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6p8c" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.275453 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.275770 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.280860 4975 patch_prober.go:28] interesting pod/console-f9d7485db-7pt74 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.280910 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7pt74" podUID="a92ef0de-4810-4e00-8f34-90fcb13c236c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.425867 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.434420 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:24 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:24 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:24 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.434472 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.501866 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6p8c"] Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.545407 4975 generic.go:334] "Generic (PLEG): container finished" podID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerID="4448d934175e8e9c33221302b0c02ee94f774b1b61b693d0039521857ab02508" exitCode=0 Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.545714 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kfln6" event={"ID":"40aeb5cf-bc56-4893-bc39-c9abc3f9e304","Type":"ContainerDied","Data":"4448d934175e8e9c33221302b0c02ee94f774b1b61b693d0039521857ab02508"} Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.545785 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfln6" event={"ID":"40aeb5cf-bc56-4893-bc39-c9abc3f9e304","Type":"ContainerStarted","Data":"6c0db3e7d802ca7476cc55c25a6c8bfe022259fbc1c0060fa274cdce1f1febfe"} Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.547183 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvzvd" event={"ID":"9bb39999-a9af-4a3b-bc78-427d76a9fbd9","Type":"ContainerStarted","Data":"64f07a25ca32f27875b0dd29a01c6b103eb86c2d2dd859b8cd9ec130fa411d55"} Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.551281 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" event={"ID":"81fdabfd-7410-4ca7-855b-c1764b58b808","Type":"ContainerDied","Data":"3e6dbcc74f3cd802ae759a53a0805606d483b01b1e8bdea3ec3f7529c9411dbe"} Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.551247 4975 generic.go:334] "Generic (PLEG): container finished" podID="81fdabfd-7410-4ca7-855b-c1764b58b808" containerID="3e6dbcc74f3cd802ae759a53a0805606d483b01b1e8bdea3ec3f7529c9411dbe" exitCode=0 Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.882057 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.910562 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kubelet-dir\") pod \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\" (UID: \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\") " Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.910672 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94" (UID: "c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.910708 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kube-api-access\") pod \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\" (UID: \"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94\") " Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.911145 4975 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:24 crc kubenswrapper[4975]: I0122 10:39:24.916382 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94" (UID: "c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.012196 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.430163 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:25 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:25 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:25 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.430215 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.558739 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.558734 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94","Type":"ContainerDied","Data":"1b43803b1b8d783cbd6ce52341503e6d8b309cf5e84e94f7bc8e9e0bb7ae8893"} Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.559165 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b43803b1b8d783cbd6ce52341503e6d8b309cf5e84e94f7bc8e9e0bb7ae8893" Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.560418 4975 generic.go:334] "Generic (PLEG): container finished" podID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerID="49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217" exitCode=0 Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.560477 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvzvd" event={"ID":"9bb39999-a9af-4a3b-bc78-427d76a9fbd9","Type":"ContainerDied","Data":"49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217"} Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.569429 4975 generic.go:334] "Generic (PLEG): container finished" podID="cece8116-dff3-488a-8ab2-0f25754d0609" containerID="01169094dc43575fbab5c56517c0b9b3eaa3855d1e990828ef068fdefc2afedc" exitCode=0 Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.569560 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6p8c" event={"ID":"cece8116-dff3-488a-8ab2-0f25754d0609","Type":"ContainerDied","Data":"01169094dc43575fbab5c56517c0b9b3eaa3855d1e990828ef068fdefc2afedc"} Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.569616 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6p8c" event={"ID":"cece8116-dff3-488a-8ab2-0f25754d0609","Type":"ContainerStarted","Data":"497e1bc29196d003ddc1fe3cb62e6c43d19c6d17f9a6f23a4464eb3e744fa352"} Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.699298 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:25 crc kubenswrapper[4975]: I0122 10:39:25.711370 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-62jj5" Jan 22 10:39:26 crc kubenswrapper[4975]: I0122 10:39:26.427822 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:26 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:26 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:26 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:26 crc kubenswrapper[4975]: I0122 10:39:26.427885 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.260549 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 10:39:27 crc kubenswrapper[4975]: E0122 10:39:27.261597 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94" containerName="pruner" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.265612 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94" containerName="pruner" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.265759 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e9ee6a-bcff-4bc3-9e65-6d67a905bd94" containerName="pruner" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.270324 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.272177 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.272920 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.272989 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.342365 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/691111ab-d285-4e41-95e0-f9500ab4ba67-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"691111ab-d285-4e41-95e0-f9500ab4ba67\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.342492 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/691111ab-d285-4e41-95e0-f9500ab4ba67-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"691111ab-d285-4e41-95e0-f9500ab4ba67\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.427522 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:27 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:27 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:27 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.427569 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.437097 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.437170 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.443487 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/691111ab-d285-4e41-95e0-f9500ab4ba67-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"691111ab-d285-4e41-95e0-f9500ab4ba67\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.443545 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/691111ab-d285-4e41-95e0-f9500ab4ba67-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"691111ab-d285-4e41-95e0-f9500ab4ba67\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.443624 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/691111ab-d285-4e41-95e0-f9500ab4ba67-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"691111ab-d285-4e41-95e0-f9500ab4ba67\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.478600 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/691111ab-d285-4e41-95e0-f9500ab4ba67-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"691111ab-d285-4e41-95e0-f9500ab4ba67\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:27 crc kubenswrapper[4975]: I0122 10:39:27.602439 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:28 crc kubenswrapper[4975]: I0122 10:39:28.427739 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:28 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:28 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:28 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:28 crc kubenswrapper[4975]: I0122 10:39:28.427802 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:29 crc kubenswrapper[4975]: I0122 10:39:29.428357 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:29 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:29 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:29 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:29 crc kubenswrapper[4975]: I0122 10:39:29.428696 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:30 crc kubenswrapper[4975]: I0122 10:39:30.243769 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-n6dj7" Jan 22 10:39:30 crc kubenswrapper[4975]: I0122 10:39:30.427634 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:30 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:30 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:30 crc kubenswrapper[4975]: healthz check failed Jan 22 
10:39:30 crc kubenswrapper[4975]: I0122 10:39:30.427746 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:31 crc kubenswrapper[4975]: I0122 10:39:31.431969 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:31 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:31 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:31 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:31 crc kubenswrapper[4975]: I0122 10:39:31.432200 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:32 crc kubenswrapper[4975]: I0122 10:39:32.427954 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:32 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:32 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:32 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:32 crc kubenswrapper[4975]: I0122 10:39:32.428249 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.275562 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.285222 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50be08c7-0f3c-47b2-b15c-94dea396c9e1-metrics-certs\") pod \"network-metrics-daemon-v5jbd\" (UID: \"50be08c7-0f3c-47b2-b15c-94dea396c9e1\") " pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.408357 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.428903 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:33 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:33 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:33 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.428972 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.579195 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5jbd" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.579316 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29jpx\" (UniqueName: \"kubernetes.io/projected/81fdabfd-7410-4ca7-855b-c1764b58b808-kube-api-access-29jpx\") pod \"81fdabfd-7410-4ca7-855b-c1764b58b808\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.579470 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81fdabfd-7410-4ca7-855b-c1764b58b808-secret-volume\") pod \"81fdabfd-7410-4ca7-855b-c1764b58b808\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.579505 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81fdabfd-7410-4ca7-855b-c1764b58b808-config-volume\") pod \"81fdabfd-7410-4ca7-855b-c1764b58b808\" (UID: \"81fdabfd-7410-4ca7-855b-c1764b58b808\") " Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.580591 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81fdabfd-7410-4ca7-855b-c1764b58b808-config-volume" (OuterVolumeSpecName: "config-volume") pod "81fdabfd-7410-4ca7-855b-c1764b58b808" (UID: "81fdabfd-7410-4ca7-855b-c1764b58b808"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.584242 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81fdabfd-7410-4ca7-855b-c1764b58b808-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81fdabfd-7410-4ca7-855b-c1764b58b808" (UID: "81fdabfd-7410-4ca7-855b-c1764b58b808"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.585005 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81fdabfd-7410-4ca7-855b-c1764b58b808-kube-api-access-29jpx" (OuterVolumeSpecName: "kube-api-access-29jpx") pod "81fdabfd-7410-4ca7-855b-c1764b58b808" (UID: "81fdabfd-7410-4ca7-855b-c1764b58b808"). InnerVolumeSpecName "kube-api-access-29jpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.681070 4975 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81fdabfd-7410-4ca7-855b-c1764b58b808-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.681120 4975 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81fdabfd-7410-4ca7-855b-c1764b58b808-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.681139 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29jpx\" (UniqueName: \"kubernetes.io/projected/81fdabfd-7410-4ca7-855b-c1764b58b808-kube-api-access-29jpx\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.711268 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" event={"ID":"81fdabfd-7410-4ca7-855b-c1764b58b808","Type":"ContainerDied","Data":"e7ab00ec4093a3429e3cafc6fcc4a412a76a3730f3239656a2795f3bb902ecba"} Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.711319 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ab00ec4093a3429e3cafc6fcc4a412a76a3730f3239656a2795f3bb902ecba" Jan 22 10:39:33 crc kubenswrapper[4975]: I0122 10:39:33.711406 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z" Jan 22 10:39:34 crc kubenswrapper[4975]: I0122 10:39:34.048442 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ptf2x" Jan 22 10:39:34 crc kubenswrapper[4975]: I0122 10:39:34.274462 4975 patch_prober.go:28] interesting pod/console-f9d7485db-7pt74 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 22 10:39:34 crc kubenswrapper[4975]: I0122 10:39:34.274536 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7pt74" podUID="a92ef0de-4810-4e00-8f34-90fcb13c236c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 22 10:39:34 crc kubenswrapper[4975]: I0122 10:39:34.427565 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:34 crc kubenswrapper[4975]: [-]has-synced failed: reason withheld Jan 22 10:39:34 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:34 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:34 crc kubenswrapper[4975]: I0122 10:39:34.427652 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:34 crc kubenswrapper[4975]: I0122 10:39:34.975231 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v5jbd"] Jan 22 10:39:35 crc 
kubenswrapper[4975]: I0122 10:39:35.027110 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 10:39:35 crc kubenswrapper[4975]: W0122 10:39:35.227369 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod691111ab_d285_4e41_95e0_f9500ab4ba67.slice/crio-354714242c4c1653d4553c132a265584e5acd7ea6767cf76bdc77cbc4f23be8a WatchSource:0}: Error finding container 354714242c4c1653d4553c132a265584e5acd7ea6767cf76bdc77cbc4f23be8a: Status 404 returned error can't find the container with id 354714242c4c1653d4553c132a265584e5acd7ea6767cf76bdc77cbc4f23be8a Jan 22 10:39:35 crc kubenswrapper[4975]: I0122 10:39:35.430524 4975 patch_prober.go:28] interesting pod/router-default-5444994796-2jxxh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:39:35 crc kubenswrapper[4975]: [+]has-synced ok Jan 22 10:39:35 crc kubenswrapper[4975]: [+]process-running ok Jan 22 10:39:35 crc kubenswrapper[4975]: healthz check failed Jan 22 10:39:35 crc kubenswrapper[4975]: I0122 10:39:35.430739 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2jxxh" podUID="3ade24ae-bd68-4ce7-ae8c-de8f35a850d9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:39:35 crc kubenswrapper[4975]: I0122 10:39:35.724723 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" event={"ID":"50be08c7-0f3c-47b2-b15c-94dea396c9e1","Type":"ContainerStarted","Data":"3a22b59c87223dff5ad70ee4bbcb3ad82f0624ad3eb147382ce21ce32503faf1"} Jan 22 10:39:35 crc kubenswrapper[4975]: I0122 10:39:35.726471 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"691111ab-d285-4e41-95e0-f9500ab4ba67","Type":"ContainerStarted","Data":"354714242c4c1653d4553c132a265584e5acd7ea6767cf76bdc77cbc4f23be8a"} Jan 22 10:39:36 crc kubenswrapper[4975]: I0122 10:39:36.428847 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:36 crc kubenswrapper[4975]: I0122 10:39:36.432725 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2jxxh" Jan 22 10:39:37 crc kubenswrapper[4975]: I0122 10:39:37.743131 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" event={"ID":"50be08c7-0f3c-47b2-b15c-94dea396c9e1","Type":"ContainerStarted","Data":"006393b0f10059f94e296cc026b2610dad2850c16d8f6b3efee2e2ff01e072df"} Jan 22 10:39:37 crc kubenswrapper[4975]: I0122 10:39:37.745099 4975 generic.go:334] "Generic (PLEG): container finished" podID="691111ab-d285-4e41-95e0-f9500ab4ba67" containerID="c6f604df1b96ac1fd399de5c593aa68c9a786d995d15371bc283a7382b24d178" exitCode=0 Jan 22 10:39:37 crc kubenswrapper[4975]: I0122 10:39:37.745134 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"691111ab-d285-4e41-95e0-f9500ab4ba67","Type":"ContainerDied","Data":"c6f604df1b96ac1fd399de5c593aa68c9a786d995d15371bc283a7382b24d178"} Jan 22 10:39:42 crc kubenswrapper[4975]: I0122 10:39:42.500933 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.281537 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.289588 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.614301 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.689720 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/691111ab-d285-4e41-95e0-f9500ab4ba67-kubelet-dir\") pod \"691111ab-d285-4e41-95e0-f9500ab4ba67\" (UID: \"691111ab-d285-4e41-95e0-f9500ab4ba67\") " Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.689807 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/691111ab-d285-4e41-95e0-f9500ab4ba67-kube-api-access\") pod \"691111ab-d285-4e41-95e0-f9500ab4ba67\" (UID: \"691111ab-d285-4e41-95e0-f9500ab4ba67\") " Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.689832 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/691111ab-d285-4e41-95e0-f9500ab4ba67-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "691111ab-d285-4e41-95e0-f9500ab4ba67" (UID: "691111ab-d285-4e41-95e0-f9500ab4ba67"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.690054 4975 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/691111ab-d285-4e41-95e0-f9500ab4ba67-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.703502 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691111ab-d285-4e41-95e0-f9500ab4ba67-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "691111ab-d285-4e41-95e0-f9500ab4ba67" (UID: "691111ab-d285-4e41-95e0-f9500ab4ba67"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.790899 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/691111ab-d285-4e41-95e0-f9500ab4ba67-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.801112 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"691111ab-d285-4e41-95e0-f9500ab4ba67","Type":"ContainerDied","Data":"354714242c4c1653d4553c132a265584e5acd7ea6767cf76bdc77cbc4f23be8a"} Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.801184 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354714242c4c1653d4553c132a265584e5acd7ea6767cf76bdc77cbc4f23be8a" Jan 22 10:39:44 crc kubenswrapper[4975]: I0122 10:39:44.801121 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:39:54 crc kubenswrapper[4975]: I0122 10:39:54.297671 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2ncxd" Jan 22 10:39:55 crc kubenswrapper[4975]: I0122 10:39:55.466221 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:39:57 crc kubenswrapper[4975]: E0122 10:39:57.421899 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 10:39:57 crc kubenswrapper[4975]: E0122 10:39:57.422520 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nd57v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q6cc7_openshift-marketplace(27a32b6c-0030-4c5a-bdbe-61722e22e813): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:39:57 crc kubenswrapper[4975]: E0122 10:39:57.423694 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-q6cc7" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" Jan 22 10:39:57 crc kubenswrapper[4975]: I0122 10:39:57.436304 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:39:57 crc kubenswrapper[4975]: I0122 10:39:57.436369 4975 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:39:58 crc kubenswrapper[4975]: E0122 10:39:58.711789 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q6cc7" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" Jan 22 10:39:58 crc kubenswrapper[4975]: E0122 10:39:58.767597 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 10:39:58 crc kubenswrapper[4975]: E0122 10:39:58.767781 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbgjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qjq6r_openshift-marketplace(7ae0356f-f7ce-4d10-bef2-c322e09f9889): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:39:58 crc kubenswrapper[4975]: E0122 10:39:58.769019 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qjq6r" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.834998 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qjq6r" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.897101 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.897314 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4hsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pg94w_openshift-marketplace(2ef0df15-7a94-4740-9601-0a29b068f5e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.898678 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pg94w" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.923962 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.924114 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjbxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mx25r_openshift-marketplace(01529967-a2f6-4fc1-936b-e99200ece5b1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.925276 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mx25r" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.928597 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.928728 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfmkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kfln6_openshift-marketplace(40aeb5cf-bc56-4893-bc39-c9abc3f9e304): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.929806 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kfln6" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.952213 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.952384 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk6zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-trqdn_openshift-marketplace(ac08af2a-139e-4898-9020-cb0f4cd0b1e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:39:59 crc kubenswrapper[4975]: E0122 10:39:59.953938 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-trqdn" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.675728 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pg94w" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.675760 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kfln6" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.675748 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mx25r" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.675897 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-trqdn" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" Jan 22 
10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.723579 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.723722 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4hzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-h6p8c_openshift-marketplace(cece8116-dff3-488a-8ab2-0f25754d0609): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.724942 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-h6p8c" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.741897 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.742078 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qwcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lvzvd_openshift-marketplace(9bb39999-a9af-4a3b-bc78-427d76a9fbd9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.743364 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lvzvd" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.948435 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-h6p8c" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" Jan 22 10:40:03 crc kubenswrapper[4975]: E0122 10:40:03.948909 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lvzvd" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" Jan 22 10:40:04 crc kubenswrapper[4975]: I0122 10:40:04.951526 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v5jbd" event={"ID":"50be08c7-0f3c-47b2-b15c-94dea396c9e1","Type":"ContainerStarted","Data":"51ec38dad9307ea9325ce66d2804e006759e98247916ea3bd4d4ba132ec9cab6"} Jan 22 10:40:04 crc kubenswrapper[4975]: I0122 10:40:04.968743 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-v5jbd" podStartSLOduration=174.968719855 podStartE2EDuration="2m54.968719855s" podCreationTimestamp="2026-01-22 10:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:40:04.965258741 +0000 UTC 
m=+197.623294861" watchObservedRunningTime="2026-01-22 10:40:04.968719855 +0000 UTC m=+197.626756005" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.861018 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 10:40:05 crc kubenswrapper[4975]: E0122 10:40:05.861297 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81fdabfd-7410-4ca7-855b-c1764b58b808" containerName="collect-profiles" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.861309 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="81fdabfd-7410-4ca7-855b-c1764b58b808" containerName="collect-profiles" Jan 22 10:40:05 crc kubenswrapper[4975]: E0122 10:40:05.861321 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691111ab-d285-4e41-95e0-f9500ab4ba67" containerName="pruner" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.861341 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="691111ab-d285-4e41-95e0-f9500ab4ba67" containerName="pruner" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.861431 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="81fdabfd-7410-4ca7-855b-c1764b58b808" containerName="collect-profiles" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.861447 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="691111ab-d285-4e41-95e0-f9500ab4ba67" containerName="pruner" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.861834 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.863907 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.864287 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.901759 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.986714 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:05 crc kubenswrapper[4975]: I0122 10:40:05.986786 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:06 crc kubenswrapper[4975]: I0122 10:40:06.088308 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:06 crc kubenswrapper[4975]: I0122 10:40:06.088408 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:06 crc kubenswrapper[4975]: I0122 10:40:06.088528 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:06 crc kubenswrapper[4975]: I0122 10:40:06.119561 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:06 crc kubenswrapper[4975]: I0122 10:40:06.194404 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:06 crc kubenswrapper[4975]: I0122 10:40:06.652094 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 10:40:06 crc kubenswrapper[4975]: W0122 10:40:06.658710 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3f155fce_ab5b_4af7_9f61_eb5788605aaa.slice/crio-12876fe38f1557a134eed07863ce741917e13480e52e7e320923c145c546b4d2 WatchSource:0}: Error finding container 12876fe38f1557a134eed07863ce741917e13480e52e7e320923c145c546b4d2: Status 404 returned error can't find the container with id 12876fe38f1557a134eed07863ce741917e13480e52e7e320923c145c546b4d2 Jan 22 10:40:06 crc kubenswrapper[4975]: I0122 10:40:06.961595 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f155fce-ab5b-4af7-9f61-eb5788605aaa","Type":"ContainerStarted","Data":"12876fe38f1557a134eed07863ce741917e13480e52e7e320923c145c546b4d2"} Jan 22 10:40:07 crc kubenswrapper[4975]: I0122 10:40:07.967919 4975 generic.go:334] "Generic (PLEG): container finished" podID="3f155fce-ab5b-4af7-9f61-eb5788605aaa" containerID="0d65eddafecf9a199a11ecc9441b258b9ff52aab40dc26135ee594454e9d4234" exitCode=0 Jan 22 10:40:07 crc kubenswrapper[4975]: I0122 10:40:07.968002 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f155fce-ab5b-4af7-9f61-eb5788605aaa","Type":"ContainerDied","Data":"0d65eddafecf9a199a11ecc9441b258b9ff52aab40dc26135ee594454e9d4234"} Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.239883 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.431373 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kubelet-dir\") pod \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\" (UID: \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\") " Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.431494 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kube-api-access\") pod \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\" (UID: \"3f155fce-ab5b-4af7-9f61-eb5788605aaa\") " Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.431540 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f155fce-ab5b-4af7-9f61-eb5788605aaa" (UID: "3f155fce-ab5b-4af7-9f61-eb5788605aaa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.431746 4975 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.438950 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f155fce-ab5b-4af7-9f61-eb5788605aaa" (UID: "3f155fce-ab5b-4af7-9f61-eb5788605aaa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.532809 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f155fce-ab5b-4af7-9f61-eb5788605aaa-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.978579 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f155fce-ab5b-4af7-9f61-eb5788605aaa","Type":"ContainerDied","Data":"12876fe38f1557a134eed07863ce741917e13480e52e7e320923c145c546b4d2"} Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.978616 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12876fe38f1557a134eed07863ce741917e13480e52e7e320923c145c546b4d2" Jan 22 10:40:09 crc kubenswrapper[4975]: I0122 10:40:09.978652 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.654763 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 10:40:10 crc kubenswrapper[4975]: E0122 10:40:10.655170 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f155fce-ab5b-4af7-9f61-eb5788605aaa" containerName="pruner" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.655182 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f155fce-ab5b-4af7-9f61-eb5788605aaa" containerName="pruner" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.655268 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f155fce-ab5b-4af7-9f61-eb5788605aaa" containerName="pruner" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.655586 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.664774 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.664978 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.667871 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.747856 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.747897 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0877a81-c73b-4269-b7d4-14834d43e46e-kube-api-access\") pod \"installer-9-crc\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.748030 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-var-lock\") pod \"installer-9-crc\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.848544 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.848590 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0877a81-c73b-4269-b7d4-14834d43e46e-kube-api-access\") pod \"installer-9-crc\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.848644 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-var-lock\") pod \"installer-9-crc\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.848710 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-var-lock\") pod \"installer-9-crc\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.848744 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.864520 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0877a81-c73b-4269-b7d4-14834d43e46e-kube-api-access\") pod \"installer-9-crc\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.984666 4975 generic.go:334] "Generic (PLEG): container finished" podID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerID="3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447" exitCode=0 Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.984709 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6cc7" event={"ID":"27a32b6c-0030-4c5a-bdbe-61722e22e813","Type":"ContainerDied","Data":"3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447"} Jan 22 10:40:10 crc kubenswrapper[4975]: I0122 10:40:10.988047 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:11 crc kubenswrapper[4975]: I0122 10:40:11.388584 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 10:40:11 crc kubenswrapper[4975]: I0122 10:40:11.992293 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6cc7" event={"ID":"27a32b6c-0030-4c5a-bdbe-61722e22e813","Type":"ContainerStarted","Data":"bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1"} Jan 22 10:40:11 crc kubenswrapper[4975]: I0122 10:40:11.993279 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a0877a81-c73b-4269-b7d4-14834d43e46e","Type":"ContainerStarted","Data":"76175fb6195a1498e121cecd59ffefd4b2a32e0d835c7f9484546abe97ff2357"} Jan 22 10:40:11 crc kubenswrapper[4975]: I0122 10:40:11.993353 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a0877a81-c73b-4269-b7d4-14834d43e46e","Type":"ContainerStarted","Data":"3fd66d1d1c415f2dd7ee5728670ec1808945096799f7e9c807499addbc947f1e"} Jan 22 10:40:12 crc kubenswrapper[4975]: I0122 10:40:12.008824 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q6cc7" podStartSLOduration=2.008005333 podStartE2EDuration="52.008806999s" podCreationTimestamp="2026-01-22 10:39:20 +0000 UTC" firstStartedPulling="2026-01-22 10:39:21.385051227 +0000 UTC m=+154.043087327" lastFinishedPulling="2026-01-22 10:40:11.385852883 +0000 UTC m=+204.043888993" observedRunningTime="2026-01-22 10:40:12.008291664 +0000 UTC m=+204.666327814" watchObservedRunningTime="2026-01-22 10:40:12.008806999 +0000 UTC m=+204.666843109" Jan 22 10:40:12 crc kubenswrapper[4975]: I0122 10:40:12.022217 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.022201766 podStartE2EDuration="2.022201766s" podCreationTimestamp="2026-01-22 10:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:40:12.020744431 +0000 UTC m=+204.678780551" watchObservedRunningTime="2026-01-22 10:40:12.022201766 +0000 UTC m=+204.680237876" Jan 22 10:40:17 crc kubenswrapper[4975]: I0122 10:40:17.028549 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfln6" event={"ID":"40aeb5cf-bc56-4893-bc39-c9abc3f9e304","Type":"ContainerStarted","Data":"16a3ec72da41f5a2d3cf8969518318718edd197760fb3f37cbf29c81fa5d196a"} Jan 22 10:40:17 crc kubenswrapper[4975]: I0122 10:40:17.032732 4975 generic.go:334] "Generic (PLEG): container finished" podID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerID="f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e" exitCode=0 Jan 22 10:40:17 crc kubenswrapper[4975]: I0122 10:40:17.032777 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjq6r" event={"ID":"7ae0356f-f7ce-4d10-bef2-c322e09f9889","Type":"ContainerDied","Data":"f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e"} Jan 22 10:40:17 crc kubenswrapper[4975]: I0122 10:40:17.037390 4975 generic.go:334] "Generic (PLEG): container finished" podID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerID="7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4" exitCode=0 
Jan 22 10:40:17 crc kubenswrapper[4975]: I0122 10:40:17.037433 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg94w" event={"ID":"2ef0df15-7a94-4740-9601-0a29b068f5e7","Type":"ContainerDied","Data":"7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4"} Jan 22 10:40:18 crc kubenswrapper[4975]: I0122 10:40:18.044835 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg94w" event={"ID":"2ef0df15-7a94-4740-9601-0a29b068f5e7","Type":"ContainerStarted","Data":"2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed"} Jan 22 10:40:18 crc kubenswrapper[4975]: I0122 10:40:18.048914 4975 generic.go:334] "Generic (PLEG): container finished" podID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerID="9ca2c21aab5c98613ac311524c16a3c53d4718352064b3a255b51512cf1a3255" exitCode=0 Jan 22 10:40:18 crc kubenswrapper[4975]: I0122 10:40:18.049003 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx25r" event={"ID":"01529967-a2f6-4fc1-936b-e99200ece5b1","Type":"ContainerDied","Data":"9ca2c21aab5c98613ac311524c16a3c53d4718352064b3a255b51512cf1a3255"} Jan 22 10:40:18 crc kubenswrapper[4975]: I0122 10:40:18.052971 4975 generic.go:334] "Generic (PLEG): container finished" podID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerID="16a3ec72da41f5a2d3cf8969518318718edd197760fb3f37cbf29c81fa5d196a" exitCode=0 Jan 22 10:40:18 crc kubenswrapper[4975]: I0122 10:40:18.053034 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfln6" event={"ID":"40aeb5cf-bc56-4893-bc39-c9abc3f9e304","Type":"ContainerDied","Data":"16a3ec72da41f5a2d3cf8969518318718edd197760fb3f37cbf29c81fa5d196a"} Jan 22 10:40:18 crc kubenswrapper[4975]: I0122 10:40:18.055262 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjq6r" event={"ID":"7ae0356f-f7ce-4d10-bef2-c322e09f9889","Type":"ContainerStarted","Data":"39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912"} Jan 22 10:40:18 crc kubenswrapper[4975]: I0122 10:40:18.093943 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pg94w" podStartSLOduration=2.11876756 podStartE2EDuration="56.093921761s" podCreationTimestamp="2026-01-22 10:39:22 +0000 UTC" firstStartedPulling="2026-01-22 10:39:23.495444454 +0000 UTC m=+156.153480564" lastFinishedPulling="2026-01-22 10:40:17.470598655 +0000 UTC m=+210.128634765" observedRunningTime="2026-01-22 10:40:18.0704843 +0000 UTC m=+210.728520440" watchObservedRunningTime="2026-01-22 10:40:18.093921761 +0000 UTC m=+210.751957871" Jan 22 10:40:18 crc kubenswrapper[4975]: I0122 10:40:18.122406 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qjq6r" podStartSLOduration=1.971981162 podStartE2EDuration="58.122389177s" podCreationTimestamp="2026-01-22 10:39:20 +0000 UTC" firstStartedPulling="2026-01-22 10:39:21.407190223 +0000 UTC m=+154.065226333" lastFinishedPulling="2026-01-22 10:40:17.557598238 +0000 UTC m=+210.215634348" observedRunningTime="2026-01-22 10:40:18.121945133 +0000 UTC m=+210.779981233" watchObservedRunningTime="2026-01-22 10:40:18.122389177 +0000 UTC m=+210.780425297" Jan 22 10:40:20 crc kubenswrapper[4975]: I0122 10:40:20.554306 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:40:20 crc kubenswrapper[4975]: I0122 10:40:20.554779 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:40:20 crc kubenswrapper[4975]: I0122 10:40:20.798360 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:40:20 crc kubenswrapper[4975]: I0122 10:40:20.798634 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:40:20 crc kubenswrapper[4975]: I0122 10:40:20.932993 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:40:20 crc kubenswrapper[4975]: I0122 10:40:20.933300 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:40:21 crc kubenswrapper[4975]: I0122 10:40:21.120172 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:40:22 crc kubenswrapper[4975]: I0122 10:40:22.772489 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pg94w" Jan 22 10:40:22 crc kubenswrapper[4975]: I0122 10:40:22.772898 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pg94w" Jan 22 10:40:22 crc kubenswrapper[4975]: I0122 10:40:22.817054 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pg94w" Jan 22 10:40:23 crc kubenswrapper[4975]: I0122 10:40:23.154301 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pg94w" Jan 22 10:40:27 crc kubenswrapper[4975]: I0122 10:40:27.436716 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:40:27 crc kubenswrapper[4975]: I0122 10:40:27.437220 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:40:27 crc kubenswrapper[4975]: I0122 10:40:27.437316 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:40:27 crc kubenswrapper[4975]: I0122 10:40:27.438384 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:40:27 crc kubenswrapper[4975]: I0122 10:40:27.438543 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" 
podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2" gracePeriod=600 Jan 22 10:40:29 crc kubenswrapper[4975]: I0122 10:40:29.120351 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2" exitCode=0 Jan 22 10:40:29 crc kubenswrapper[4975]: I0122 10:40:29.120487 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2"} Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.131241 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvzvd" event={"ID":"9bb39999-a9af-4a3b-bc78-427d76a9fbd9","Type":"ContainerStarted","Data":"6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9"} Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.133729 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfln6" event={"ID":"40aeb5cf-bc56-4893-bc39-c9abc3f9e304","Type":"ContainerStarted","Data":"2d27b4add0cde2a7cb7fe5fc45a6ae27dba1f011798e20dd147579774aa5980f"} Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.137289 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"eda945b61af510720457294a7eb7909c6ec1747b6bf8c3923093ea32bb4739ef"} Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.138071 4975 generic.go:334] "Generic (PLEG): container finished" podID="cece8116-dff3-488a-8ab2-0f25754d0609" containerID="9d81a18cf061484f173d9a29796012eeb1249585fd419607a04e41770ce75aee" exitCode=0 Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.138130 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6p8c" event={"ID":"cece8116-dff3-488a-8ab2-0f25754d0609","Type":"ContainerDied","Data":"9d81a18cf061484f173d9a29796012eeb1249585fd419607a04e41770ce75aee"} Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.145318 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trqdn" event={"ID":"ac08af2a-139e-4898-9020-cb0f4cd0b1e1","Type":"ContainerStarted","Data":"99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef"} Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.155622 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx25r" event={"ID":"01529967-a2f6-4fc1-936b-e99200ece5b1","Type":"ContainerStarted","Data":"978c7fe042f46c8b6e5b8869c0d8e706c2cfb863ada61e1a411d84f70a84d6a7"} Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.185620 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mx25r" podStartSLOduration=3.894871002 podStartE2EDuration="1m10.185601121s" podCreationTimestamp="2026-01-22 10:39:20 +0000 UTC" firstStartedPulling="2026-01-22 10:39:22.461986769 +0000 UTC m=+155.120022879" lastFinishedPulling="2026-01-22 10:40:28.752716848 +0000 UTC m=+221.410752998" observedRunningTime="2026-01-22 10:40:30.184350692 +0000 UTC m=+222.842386812" 
watchObservedRunningTime="2026-01-22 10:40:30.185601121 +0000 UTC m=+222.843637231" Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.260062 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kfln6" podStartSLOduration=4.135070971 podStartE2EDuration="1m8.260039142s" podCreationTimestamp="2026-01-22 10:39:22 +0000 UTC" firstStartedPulling="2026-01-22 10:39:24.610905745 +0000 UTC m=+157.268941855" lastFinishedPulling="2026-01-22 10:40:28.735873866 +0000 UTC m=+221.393910026" observedRunningTime="2026-01-22 10:40:30.256506924 +0000 UTC m=+222.914543034" watchObservedRunningTime="2026-01-22 10:40:30.260039142 +0000 UTC m=+222.918075252" Jan 22 10:40:30 crc kubenswrapper[4975]: I0122 10:40:30.601399 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:40:31 crc kubenswrapper[4975]: I0122 10:40:31.161841 4975 generic.go:334] "Generic (PLEG): container finished" podID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerID="99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef" exitCode=0 Jan 22 10:40:31 crc kubenswrapper[4975]: I0122 10:40:31.161907 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trqdn" event={"ID":"ac08af2a-139e-4898-9020-cb0f4cd0b1e1","Type":"ContainerDied","Data":"99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef"} Jan 22 10:40:31 crc kubenswrapper[4975]: I0122 10:40:31.163262 4975 generic.go:334] "Generic (PLEG): container finished" podID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerID="6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9" exitCode=0 Jan 22 10:40:31 crc kubenswrapper[4975]: I0122 10:40:31.163345 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvzvd" event={"ID":"9bb39999-a9af-4a3b-bc78-427d76a9fbd9","Type":"ContainerDied","Data":"6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9"} Jan 22 10:40:31 crc kubenswrapper[4975]: I0122 10:40:31.166784 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6p8c" event={"ID":"cece8116-dff3-488a-8ab2-0f25754d0609","Type":"ContainerStarted","Data":"df5d039de94e5b37f6b133a68a92d10ddff22e0079f96bd59e26f0bc13f8d4ff"} Jan 22 10:40:31 crc kubenswrapper[4975]: I0122 10:40:31.200268 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h6p8c" podStartSLOduration=3.090909177 podStartE2EDuration="1m8.200251687s" podCreationTimestamp="2026-01-22 10:39:23 +0000 UTC" firstStartedPulling="2026-01-22 10:39:25.571064953 +0000 UTC m=+158.229101063" lastFinishedPulling="2026-01-22 10:40:30.680407463 +0000 UTC m=+223.338443573" observedRunningTime="2026-01-22 10:40:31.197692279 +0000 UTC m=+223.855728409" watchObservedRunningTime="2026-01-22 10:40:31.200251687 +0000 UTC m=+223.858287797" Jan 22 10:40:31 crc kubenswrapper[4975]: I0122 10:40:31.261803 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:40:31 crc kubenswrapper[4975]: I0122 10:40:31.261889 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:40:32 crc kubenswrapper[4975]: I0122 10:40:32.174212 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-trqdn" event={"ID":"ac08af2a-139e-4898-9020-cb0f4cd0b1e1","Type":"ContainerStarted","Data":"fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358"} Jan 22 10:40:32 crc kubenswrapper[4975]: I0122 10:40:32.306720 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mx25r" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerName="registry-server" probeResult="failure" output=< Jan 22 10:40:32 crc kubenswrapper[4975]: timeout: failed to connect service ":50051" within 1s Jan 22 10:40:32 crc kubenswrapper[4975]: > Jan 22 10:40:33 crc kubenswrapper[4975]: I0122 10:40:33.170890 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kfln6" Jan 22 10:40:33 crc kubenswrapper[4975]: I0122 10:40:33.171395 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kfln6" Jan 22 10:40:33 crc kubenswrapper[4975]: I0122 10:40:33.192232 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvzvd" event={"ID":"9bb39999-a9af-4a3b-bc78-427d76a9fbd9","Type":"ContainerStarted","Data":"e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3"} Jan 22 10:40:33 crc kubenswrapper[4975]: I0122 10:40:33.226694 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lvzvd" podStartSLOduration=3.7808062590000002 podStartE2EDuration="1m10.226663101s" podCreationTimestamp="2026-01-22 10:39:23 +0000 UTC" firstStartedPulling="2026-01-22 10:39:25.562591558 +0000 UTC m=+158.220627668" lastFinishedPulling="2026-01-22 10:40:32.00844839 +0000 UTC m=+224.666484510" observedRunningTime="2026-01-22 10:40:33.226482575 +0000 UTC m=+225.884518745" watchObservedRunningTime="2026-01-22 10:40:33.226663101 +0000 UTC m=+225.884699241" Jan 22 10:40:33 crc kubenswrapper[4975]: I0122 10:40:33.227766 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-trqdn" podStartSLOduration=3.76125044 podStartE2EDuration="1m13.227751485s" podCreationTimestamp="2026-01-22 10:39:20 +0000 UTC" firstStartedPulling="2026-01-22 10:39:22.436040238 +0000 UTC m=+155.094076348" lastFinishedPulling="2026-01-22 10:40:31.902541243 +0000 UTC m=+224.560577393" observedRunningTime="2026-01-22 10:40:32.207725114 +0000 UTC m=+224.865761224" watchObservedRunningTime="2026-01-22 10:40:33.227751485 +0000 UTC m=+225.885787625" Jan 22 10:40:33 crc kubenswrapper[4975]: I0122 10:40:33.244101 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kfln6" Jan 22 10:40:33 crc kubenswrapper[4975]: I0122 10:40:33.794287 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lvzvd" Jan 22 10:40:33 crc kubenswrapper[4975]: I0122 10:40:33.794378 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lvzvd" Jan 22 10:40:33 crc kubenswrapper[4975]: I0122 10:40:33.966433 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c7fpw"] Jan 22 10:40:34 crc kubenswrapper[4975]: I0122 10:40:34.204246 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h6p8c" Jan 22 10:40:34 crc 
kubenswrapper[4975]: I0122 10:40:34.205357 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h6p8c" Jan 22 10:40:34 crc kubenswrapper[4975]: I0122 10:40:34.864741 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lvzvd" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="registry-server" probeResult="failure" output=< Jan 22 10:40:34 crc kubenswrapper[4975]: timeout: failed to connect service ":50051" within 1s Jan 22 10:40:34 crc kubenswrapper[4975]: > Jan 22 10:40:35 crc kubenswrapper[4975]: I0122 10:40:35.246001 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h6p8c" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" containerName="registry-server" probeResult="failure" output=< Jan 22 10:40:35 crc kubenswrapper[4975]: timeout: failed to connect service ":50051" within 1s Jan 22 10:40:35 crc kubenswrapper[4975]: > Jan 22 10:40:41 crc kubenswrapper[4975]: I0122 10:40:41.167319 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-trqdn" Jan 22 10:40:41 crc kubenswrapper[4975]: I0122 10:40:41.167825 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-trqdn" Jan 22 10:40:41 crc kubenswrapper[4975]: I0122 10:40:41.232390 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-trqdn" Jan 22 10:40:41 crc kubenswrapper[4975]: I0122 10:40:41.308556 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-trqdn" Jan 22 10:40:41 crc kubenswrapper[4975]: I0122 10:40:41.337516 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:40:41 crc kubenswrapper[4975]: I0122 10:40:41.383580 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:40:42 crc kubenswrapper[4975]: I0122 10:40:42.583975 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mx25r"] Jan 22 10:40:43 crc kubenswrapper[4975]: I0122 10:40:43.212100 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kfln6" Jan 22 10:40:43 crc kubenswrapper[4975]: I0122 10:40:43.250297 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mx25r" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerName="registry-server" containerID="cri-o://978c7fe042f46c8b6e5b8869c0d8e706c2cfb863ada61e1a411d84f70a84d6a7" gracePeriod=2 Jan 22 10:40:43 crc kubenswrapper[4975]: I0122 10:40:43.850759 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lvzvd" Jan 22 10:40:43 crc kubenswrapper[4975]: I0122 10:40:43.906927 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lvzvd" Jan 22 10:40:44 crc kubenswrapper[4975]: I0122 10:40:44.254759 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h6p8c" Jan 22 10:40:44 crc kubenswrapper[4975]: I0122 10:40:44.330234 4975 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h6p8c" Jan 22 10:40:44 crc kubenswrapper[4975]: I0122 10:40:44.385638 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trqdn"] Jan 22 10:40:44 crc kubenswrapper[4975]: I0122 10:40:44.385946 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-trqdn" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerName="registry-server" containerID="cri-o://fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358" gracePeriod=2 Jan 22 10:40:44 crc kubenswrapper[4975]: I0122 10:40:44.987127 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6p8c"] Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.264973 4975 generic.go:334] "Generic (PLEG): container finished" podID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerID="978c7fe042f46c8b6e5b8869c0d8e706c2cfb863ada61e1a411d84f70a84d6a7" exitCode=0 Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.265027 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx25r" event={"ID":"01529967-a2f6-4fc1-936b-e99200ece5b1","Type":"ContainerDied","Data":"978c7fe042f46c8b6e5b8869c0d8e706c2cfb863ada61e1a411d84f70a84d6a7"} Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.783428 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.807210 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjbxj\" (UniqueName: \"kubernetes.io/projected/01529967-a2f6-4fc1-936b-e99200ece5b1-kube-api-access-wjbxj\") pod \"01529967-a2f6-4fc1-936b-e99200ece5b1\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.807376 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-catalog-content\") pod \"01529967-a2f6-4fc1-936b-e99200ece5b1\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.807625 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-utilities\") pod \"01529967-a2f6-4fc1-936b-e99200ece5b1\" (UID: \"01529967-a2f6-4fc1-936b-e99200ece5b1\") " Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.809555 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-utilities" (OuterVolumeSpecName: "utilities") pod "01529967-a2f6-4fc1-936b-e99200ece5b1" (UID: "01529967-a2f6-4fc1-936b-e99200ece5b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.815486 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01529967-a2f6-4fc1-936b-e99200ece5b1-kube-api-access-wjbxj" (OuterVolumeSpecName: "kube-api-access-wjbxj") pod "01529967-a2f6-4fc1-936b-e99200ece5b1" (UID: "01529967-a2f6-4fc1-936b-e99200ece5b1"). InnerVolumeSpecName "kube-api-access-wjbxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.857609 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01529967-a2f6-4fc1-936b-e99200ece5b1" (UID: "01529967-a2f6-4fc1-936b-e99200ece5b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.909420 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.909467 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01529967-a2f6-4fc1-936b-e99200ece5b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:45 crc kubenswrapper[4975]: I0122 10:40:45.909483 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjbxj\" (UniqueName: \"kubernetes.io/projected/01529967-a2f6-4fc1-936b-e99200ece5b1-kube-api-access-wjbxj\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.169104 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trqdn" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.273362 4975 generic.go:334] "Generic (PLEG): container finished" podID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerID="fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358" exitCode=0 Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.273821 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trqdn" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.273989 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trqdn" event={"ID":"ac08af2a-139e-4898-9020-cb0f4cd0b1e1","Type":"ContainerDied","Data":"fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358"} Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.274069 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trqdn" event={"ID":"ac08af2a-139e-4898-9020-cb0f4cd0b1e1","Type":"ContainerDied","Data":"289ab4be85572844c962f818758940b06e1bc3c4abd849586c7588da5241905e"} Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.274101 4975 scope.go:117] "RemoveContainer" containerID="fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.277459 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx25r" event={"ID":"01529967-a2f6-4fc1-936b-e99200ece5b1","Type":"ContainerDied","Data":"66a8ba492dc4caa04ee044d9b1bf1d0668bfb84e77f62f6ec8dc4e6a44df1568"} Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.277658 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h6p8c" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" containerName="registry-server" containerID="cri-o://df5d039de94e5b37f6b133a68a92d10ddff22e0079f96bd59e26f0bc13f8d4ff" gracePeriod=2 Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.277869 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx25r" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.297017 4975 scope.go:117] "RemoveContainer" containerID="99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.306865 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mx25r"] Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.317469 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mx25r"] Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.337147 4975 scope.go:117] "RemoveContainer" containerID="dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.347753 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk6zv\" (UniqueName: \"kubernetes.io/projected/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-kube-api-access-sk6zv\") pod \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.347807 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-utilities\") pod \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\" (UID: \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.347835 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-catalog-content\") pod \"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\" (UID: 
\"ac08af2a-139e-4898-9020-cb0f4cd0b1e1\") " Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.349209 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-utilities" (OuterVolumeSpecName: "utilities") pod "ac08af2a-139e-4898-9020-cb0f4cd0b1e1" (UID: "ac08af2a-139e-4898-9020-cb0f4cd0b1e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.353353 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-kube-api-access-sk6zv" (OuterVolumeSpecName: "kube-api-access-sk6zv") pod "ac08af2a-139e-4898-9020-cb0f4cd0b1e1" (UID: "ac08af2a-139e-4898-9020-cb0f4cd0b1e1"). InnerVolumeSpecName "kube-api-access-sk6zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.360568 4975 scope.go:117] "RemoveContainer" containerID="fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358" Jan 22 10:40:46 crc kubenswrapper[4975]: E0122 10:40:46.360891 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358\": container with ID starting with fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358 not found: ID does not exist" containerID="fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.360918 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358"} err="failed to get container status \"fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358\": rpc error: code = NotFound desc = could not find container \"fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358\": container with ID starting with fde9c00ca87841610730e3a5fd4730f583632f2c951339598654d0751a234358 not found: ID does not exist" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.360942 4975 scope.go:117] "RemoveContainer" containerID="99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef" Jan 22 10:40:46 crc kubenswrapper[4975]: E0122 10:40:46.361203 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef\": container with ID starting with 99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef not found: ID does not exist" containerID="99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.361222 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef"} err="failed to get container status \"99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef\": rpc error: code = NotFound desc = could not find container \"99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef\": container with ID starting with 99ad50224a74f003c014c12a707b02c0a42d0e65abe6c4d6eb3cf4a9bec31bef not found: ID does not exist" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.361235 4975 scope.go:117] "RemoveContainer" 
containerID="dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0" Jan 22 10:40:46 crc kubenswrapper[4975]: E0122 10:40:46.362568 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0\": container with ID starting with dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0 not found: ID does not exist" containerID="dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.362602 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0"} err="failed to get container status \"dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0\": rpc error: code = NotFound desc = could not find container \"dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0\": container with ID starting with dfc5401d17310e630609608001451344365036a7ac0935a9450fa37d06f58cc0 not found: ID does not exist" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.362658 4975 scope.go:117] "RemoveContainer" containerID="978c7fe042f46c8b6e5b8869c0d8e706c2cfb863ada61e1a411d84f70a84d6a7" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.397289 4975 scope.go:117] "RemoveContainer" containerID="9ca2c21aab5c98613ac311524c16a3c53d4718352064b3a255b51512cf1a3255" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.413126 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac08af2a-139e-4898-9020-cb0f4cd0b1e1" (UID: "ac08af2a-139e-4898-9020-cb0f4cd0b1e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.418052 4975 scope.go:117] "RemoveContainer" containerID="955893a5ed7f65869f2102646eb03ebaabdbcc7862f6d6b6acf377a655e000dc" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.449665 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk6zv\" (UniqueName: \"kubernetes.io/projected/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-kube-api-access-sk6zv\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.449708 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.449722 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac08af2a-139e-4898-9020-cb0f4cd0b1e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.624674 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trqdn"] Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.630605 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-trqdn"] Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.789273 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfln6"] Jan 22 10:40:46 crc kubenswrapper[4975]: I0122 10:40:46.789876 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kfln6" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerName="registry-server" containerID="cri-o://2d27b4add0cde2a7cb7fe5fc45a6ae27dba1f011798e20dd147579774aa5980f" gracePeriod=2 Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.292456 4975 generic.go:334] "Generic (PLEG): container finished" podID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerID="2d27b4add0cde2a7cb7fe5fc45a6ae27dba1f011798e20dd147579774aa5980f" exitCode=0 Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.292581 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfln6" event={"ID":"40aeb5cf-bc56-4893-bc39-c9abc3f9e304","Type":"ContainerDied","Data":"2d27b4add0cde2a7cb7fe5fc45a6ae27dba1f011798e20dd147579774aa5980f"} Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.295809 4975 generic.go:334] "Generic (PLEG): container finished" podID="cece8116-dff3-488a-8ab2-0f25754d0609" containerID="df5d039de94e5b37f6b133a68a92d10ddff22e0079f96bd59e26f0bc13f8d4ff" exitCode=0 Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.295864 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6p8c" event={"ID":"cece8116-dff3-488a-8ab2-0f25754d0609","Type":"ContainerDied","Data":"df5d039de94e5b37f6b133a68a92d10ddff22e0079f96bd59e26f0bc13f8d4ff"} Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.734541 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kfln6" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.769986 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" path="/var/lib/kubelet/pods/01529967-a2f6-4fc1-936b-e99200ece5b1/volumes" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.774198 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" path="/var/lib/kubelet/pods/ac08af2a-139e-4898-9020-cb0f4cd0b1e1/volumes" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.777377 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-utilities" (OuterVolumeSpecName: "utilities") pod "40aeb5cf-bc56-4893-bc39-c9abc3f9e304" (UID: "40aeb5cf-bc56-4893-bc39-c9abc3f9e304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.778192 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-utilities\") pod \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.778269 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-catalog-content\") pod \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.778907 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfmkk\" (UniqueName: \"kubernetes.io/projected/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-kube-api-access-xfmkk\") pod \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\" (UID: \"40aeb5cf-bc56-4893-bc39-c9abc3f9e304\") " Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.779256 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.783468 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-kube-api-access-xfmkk" (OuterVolumeSpecName: "kube-api-access-xfmkk") pod "40aeb5cf-bc56-4893-bc39-c9abc3f9e304" (UID: "40aeb5cf-bc56-4893-bc39-c9abc3f9e304"). InnerVolumeSpecName "kube-api-access-xfmkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.808162 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40aeb5cf-bc56-4893-bc39-c9abc3f9e304" (UID: "40aeb5cf-bc56-4893-bc39-c9abc3f9e304"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.833022 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6p8c" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.880659 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-catalog-content\") pod \"cece8116-dff3-488a-8ab2-0f25754d0609\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.880770 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-utilities\") pod \"cece8116-dff3-488a-8ab2-0f25754d0609\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.880830 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4hzv\" (UniqueName: \"kubernetes.io/projected/cece8116-dff3-488a-8ab2-0f25754d0609-kube-api-access-s4hzv\") pod \"cece8116-dff3-488a-8ab2-0f25754d0609\" (UID: \"cece8116-dff3-488a-8ab2-0f25754d0609\") " Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.881093 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.881120 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfmkk\" (UniqueName: \"kubernetes.io/projected/40aeb5cf-bc56-4893-bc39-c9abc3f9e304-kube-api-access-xfmkk\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.883047 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-utilities" (OuterVolumeSpecName: "utilities") pod "cece8116-dff3-488a-8ab2-0f25754d0609" (UID: "cece8116-dff3-488a-8ab2-0f25754d0609"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.885312 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cece8116-dff3-488a-8ab2-0f25754d0609-kube-api-access-s4hzv" (OuterVolumeSpecName: "kube-api-access-s4hzv") pod "cece8116-dff3-488a-8ab2-0f25754d0609" (UID: "cece8116-dff3-488a-8ab2-0f25754d0609"). InnerVolumeSpecName "kube-api-access-s4hzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.982308 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:47 crc kubenswrapper[4975]: I0122 10:40:47.982396 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4hzv\" (UniqueName: \"kubernetes.io/projected/cece8116-dff3-488a-8ab2-0f25754d0609-kube-api-access-s4hzv\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.051889 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cece8116-dff3-488a-8ab2-0f25754d0609" (UID: "cece8116-dff3-488a-8ab2-0f25754d0609"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.084207 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cece8116-dff3-488a-8ab2-0f25754d0609-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.306296 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kfln6" event={"ID":"40aeb5cf-bc56-4893-bc39-c9abc3f9e304","Type":"ContainerDied","Data":"6c0db3e7d802ca7476cc55c25a6c8bfe022259fbc1c0060fa274cdce1f1febfe"} Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.306412 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kfln6" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.306843 4975 scope.go:117] "RemoveContainer" containerID="2d27b4add0cde2a7cb7fe5fc45a6ae27dba1f011798e20dd147579774aa5980f" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.309537 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6p8c" event={"ID":"cece8116-dff3-488a-8ab2-0f25754d0609","Type":"ContainerDied","Data":"497e1bc29196d003ddc1fe3cb62e6c43d19c6d17f9a6f23a4464eb3e744fa352"} Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.309704 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6p8c" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.334144 4975 scope.go:117] "RemoveContainer" containerID="16a3ec72da41f5a2d3cf8969518318718edd197760fb3f37cbf29c81fa5d196a" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.361925 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6p8c"] Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.373052 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h6p8c"] Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.378917 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfln6"] Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.381469 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kfln6"] Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.383831 4975 scope.go:117] "RemoveContainer" containerID="4448d934175e8e9c33221302b0c02ee94f774b1b61b693d0039521857ab02508" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.404092 4975 scope.go:117] "RemoveContainer" containerID="df5d039de94e5b37f6b133a68a92d10ddff22e0079f96bd59e26f0bc13f8d4ff" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.424078 4975 scope.go:117] "RemoveContainer" containerID="9d81a18cf061484f173d9a29796012eeb1249585fd419607a04e41770ce75aee" Jan 22 10:40:48 crc kubenswrapper[4975]: I0122 10:40:48.440549 4975 scope.go:117] "RemoveContainer" containerID="01169094dc43575fbab5c56517c0b9b3eaa3855d1e990828ef068fdefc2afedc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.363317 4975 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.363824 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
containerID="cri-o://e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589" gracePeriod=15 Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.363869 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5" gracePeriod=15 Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.363922 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9" gracePeriod=15 Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.363956 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232" gracePeriod=15 Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.364116 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77" gracePeriod=15 Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.364645 4975 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.364935 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerName="extract-content" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.364950 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerName="extract-content" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.364962 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" containerName="extract-content" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.364970 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" containerName="extract-content" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365001 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365009 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365022 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365030 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365040 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" 
containerName="extract-content" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365049 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerName="extract-content" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365080 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365089 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365102 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365110 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365121 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365129 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365161 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365170 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365181 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" containerName="extract-utilities" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365189 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" containerName="extract-utilities" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365205 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365214 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365245 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerName="extract-utilities" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365253 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerName="extract-utilities" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365264 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365272 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365285 4975 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365293 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365323 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365347 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365360 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerName="extract-utilities" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365368 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerName="extract-utilities" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365401 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerName="extract-content" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365410 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerName="extract-content" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365422 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365430 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.365441 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerName="extract-utilities" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365449 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" containerName="extract-utilities" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365603 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365619 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365654 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365663 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365676 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365684 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="01529967-a2f6-4fc1-936b-e99200ece5b1" 
containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365696 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac08af2a-139e-4898-9020-cb0f4cd0b1e1" containerName="registry-server" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365728 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365740 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.365753 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.367785 4975 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.368475 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.375852 4975 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.404393 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.404545 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.404610 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.404661 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.404727 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.404812 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.405294 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.405416 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.413361 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.513407 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.513716 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.513756 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.513815 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.513839 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.513858 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.513884 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.513923 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.514007 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.514099 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.514129 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.514158 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.514186 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.514212 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.514240 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.514267 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.555573 4975 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.555741 4975 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.555885 4975 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.556025 4975 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.556186 4975 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.556208 4975 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.556353 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="200ms" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.709939 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:40:49 crc kubenswrapper[4975]: W0122 10:40:49.744134 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-08ac01e618b11c0781ac8c86d6939020d0357537b3193fbef198da2573ab29e6 WatchSource:0}: Error finding container 08ac01e618b11c0781ac8c86d6939020d0357537b3193fbef198da2573ab29e6: Status 404 returned error can't find the container with id 08ac01e618b11c0781ac8c86d6939020d0357537b3193fbef198da2573ab29e6 Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.748069 4975 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d077a20fcbb7c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 10:40:49.747311484 +0000 UTC m=+242.405347634,LastTimestamp:2026-01-22 10:40:49.747311484 +0000 UTC m=+242.405347634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 10:40:49 crc kubenswrapper[4975]: E0122 10:40:49.757676 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="400ms" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.770957 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40aeb5cf-bc56-4893-bc39-c9abc3f9e304" path="/var/lib/kubelet/pods/40aeb5cf-bc56-4893-bc39-c9abc3f9e304/volumes" Jan 22 10:40:49 crc kubenswrapper[4975]: I0122 10:40:49.772138 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cece8116-dff3-488a-8ab2-0f25754d0609" path="/var/lib/kubelet/pods/cece8116-dff3-488a-8ab2-0f25754d0609/volumes" Jan 22 10:40:50 crc kubenswrapper[4975]: E0122 10:40:50.159091 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="800ms" Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.347216 4975 generic.go:334] "Generic (PLEG): container finished" podID="a0877a81-c73b-4269-b7d4-14834d43e46e" containerID="76175fb6195a1498e121cecd59ffefd4b2a32e0d835c7f9484546abe97ff2357" exitCode=0 Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.347399 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"a0877a81-c73b-4269-b7d4-14834d43e46e","Type":"ContainerDied","Data":"76175fb6195a1498e121cecd59ffefd4b2a32e0d835c7f9484546abe97ff2357"} Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.348844 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.349690 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.352483 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.355001 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.356456 4975 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5" exitCode=0 Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.356498 4975 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232" exitCode=0 Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.356516 4975 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9" exitCode=0 Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.356531 4975 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77" exitCode=2 Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.356518 4975 scope.go:117] "RemoveContainer" containerID="dacce521d78196dd158afb73f6af7e8dda0905e3ceaaed5d59afda7f98a422c3" Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.359189 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3b66f8daa847d7df9dfa093ba5ded3a347c90f8cf86f126ff11c7424194d1aae"} Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.359255 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"08ac01e618b11c0781ac8c86d6939020d0357537b3193fbef198da2573ab29e6"} Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.360145 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:50 crc kubenswrapper[4975]: I0122 10:40:50.360723 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:50 crc kubenswrapper[4975]: E0122 10:40:50.960843 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="1.6s" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.369750 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.783461 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.784711 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.785229 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.790163 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.791148 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.791546 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.791932 4975 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.792212 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.850986 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851094 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851130 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-kubelet-dir\") pod \"a0877a81-c73b-4269-b7d4-14834d43e46e\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851164 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851191 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a0877a81-c73b-4269-b7d4-14834d43e46e" (UID: "a0877a81-c73b-4269-b7d4-14834d43e46e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851208 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851240 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851256 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851310 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-var-lock\") pod \"a0877a81-c73b-4269-b7d4-14834d43e46e\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851365 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0877a81-c73b-4269-b7d4-14834d43e46e-kube-api-access\") pod \"a0877a81-c73b-4269-b7d4-14834d43e46e\" (UID: \"a0877a81-c73b-4269-b7d4-14834d43e46e\") " Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851413 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-var-lock" (OuterVolumeSpecName: "var-lock") pod "a0877a81-c73b-4269-b7d4-14834d43e46e" (UID: "a0877a81-c73b-4269-b7d4-14834d43e46e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851741 4975 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851758 4975 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851766 4975 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851773 4975 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a0877a81-c73b-4269-b7d4-14834d43e46e-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.851781 4975 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.863111 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0877a81-c73b-4269-b7d4-14834d43e46e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a0877a81-c73b-4269-b7d4-14834d43e46e" (UID: "a0877a81-c73b-4269-b7d4-14834d43e46e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:40:51 crc kubenswrapper[4975]: I0122 10:40:51.952973 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0877a81-c73b-4269-b7d4-14834d43e46e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.381231 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.382090 4975 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589" exitCode=0 Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.382256 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.382316 4975 scope.go:117] "RemoveContainer" containerID="193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.385649 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a0877a81-c73b-4269-b7d4-14834d43e46e","Type":"ContainerDied","Data":"3fd66d1d1c415f2dd7ee5728670ec1808945096799f7e9c807499addbc947f1e"} Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.385702 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd66d1d1c415f2dd7ee5728670ec1808945096799f7e9c807499addbc947f1e" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.385774 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.399529 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.399938 4975 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.400723 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.407946 4975 scope.go:117] "RemoveContainer" containerID="17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.419673 4975 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.420068 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.420595 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:52 crc 
kubenswrapper[4975]: I0122 10:40:52.431957 4975 scope.go:117] "RemoveContainer" containerID="197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.465309 4975 scope.go:117] "RemoveContainer" containerID="1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.495014 4975 scope.go:117] "RemoveContainer" containerID="e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.532758 4975 scope.go:117] "RemoveContainer" containerID="5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69" Jan 22 10:40:52 crc kubenswrapper[4975]: E0122 10:40:52.561928 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="3.2s" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.563588 4975 scope.go:117] "RemoveContainer" containerID="193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5" Jan 22 10:40:52 crc kubenswrapper[4975]: E0122 10:40:52.564086 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\": container with ID starting with 193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5 not found: ID does not exist" containerID="193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.564125 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5"} err="failed to get container status \"193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\": rpc error: code = NotFound desc = could not find container \"193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5\": container with ID starting with 193289101b2b16b14d4abbd2cafcd0187ad024c51a442bca737a56df17ef24a5 not found: ID does not exist" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.564154 4975 scope.go:117] "RemoveContainer" containerID="17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232" Jan 22 10:40:52 crc kubenswrapper[4975]: E0122 10:40:52.564686 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\": container with ID starting with 17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232 not found: ID does not exist" containerID="17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.564712 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232"} err="failed to get container status \"17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\": rpc error: code = NotFound desc = could not find container \"17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232\": container with ID starting with 17c19053e7b20a6e2651ba9cd69104b8cce0dc0da1cf0bf70494591db569a232 not found: ID does not exist" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 
10:40:52.564731 4975 scope.go:117] "RemoveContainer" containerID="197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9" Jan 22 10:40:52 crc kubenswrapper[4975]: E0122 10:40:52.565139 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\": container with ID starting with 197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9 not found: ID does not exist" containerID="197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.565170 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9"} err="failed to get container status \"197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\": rpc error: code = NotFound desc = could not find container \"197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9\": container with ID starting with 197c3406e0eabb4ba9f0636035ee0c25c1d4bed6c24667299919f469a87e6ef9 not found: ID does not exist" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.565192 4975 scope.go:117] "RemoveContainer" containerID="1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77" Jan 22 10:40:52 crc kubenswrapper[4975]: E0122 10:40:52.565795 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\": container with ID starting with 1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77 not found: ID does not exist" containerID="1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.565860 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77"} err="failed to get container status \"1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\": rpc error: code = NotFound desc = could not find container \"1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77\": container with ID starting with 1859d25fcb2f5aeaab99c1113f51cd0f31db642e0e024b0bf00685a5671cdb77 not found: ID does not exist" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.565908 4975 scope.go:117] "RemoveContainer" containerID="e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589" Jan 22 10:40:52 crc kubenswrapper[4975]: E0122 10:40:52.566146 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\": container with ID starting with e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589 not found: ID does not exist" containerID="e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.566171 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589"} err="failed to get container status \"e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\": rpc error: code = NotFound desc = could not find container \"e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589\": container with ID 
starting with e0ec638669b0ad1686f7bf89712a23259bf131293e7c0fa0728f15406b46f589 not found: ID does not exist" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.566189 4975 scope.go:117] "RemoveContainer" containerID="5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69" Jan 22 10:40:52 crc kubenswrapper[4975]: E0122 10:40:52.567056 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\": container with ID starting with 5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69 not found: ID does not exist" containerID="5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69" Jan 22 10:40:52 crc kubenswrapper[4975]: I0122 10:40:52.567084 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69"} err="failed to get container status \"5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\": rpc error: code = NotFound desc = could not find container \"5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69\": container with ID starting with 5bdba00198ddabac2874ab8d0894489b8b2ec04547c34a6f456fce251778fe69 not found: ID does not exist" Jan 22 10:40:53 crc kubenswrapper[4975]: I0122 10:40:53.766443 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 10:40:55 crc kubenswrapper[4975]: E0122 10:40:55.762632 4975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="6.4s" Jan 22 10:40:55 crc kubenswrapper[4975]: E0122 10:40:55.890762 4975 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d077a20fcbb7c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 10:40:49.747311484 +0000 UTC m=+242.405347634,LastTimestamp:2026-01-22 10:40:49.747311484 +0000 UTC m=+242.405347634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 10:40:57 crc kubenswrapper[4975]: I0122 10:40:57.761324 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:57 crc kubenswrapper[4975]: 
I0122 10:40:57.761960 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:58 crc kubenswrapper[4975]: I0122 10:40:58.991401 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" containerName="oauth-openshift" containerID="cri-o://a3b160155690ed814c14af8a7bf354dabb1e670090a3ec3f76c86419c439f5dd" gracePeriod=15 Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.441901 4975 generic.go:334] "Generic (PLEG): container finished" podID="20e6fd86-f979-409b-9577-6ce3a00ff73d" containerID="a3b160155690ed814c14af8a7bf354dabb1e670090a3ec3f76c86419c439f5dd" exitCode=0 Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.441999 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" event={"ID":"20e6fd86-f979-409b-9577-6ce3a00ff73d","Type":"ContainerDied","Data":"a3b160155690ed814c14af8a7bf354dabb1e670090a3ec3f76c86419c439f5dd"} Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.442226 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" event={"ID":"20e6fd86-f979-409b-9577-6ce3a00ff73d","Type":"ContainerDied","Data":"d43b83c4c79c000becbe9db8b21ea0c9f777226a6fe22783c94ba205edab3c8c"} Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.442245 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43b83c4c79c000becbe9db8b21ea0c9f777226a6fe22783c94ba205edab3c8c" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.476828 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.477534 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.477983 4975 status_manager.go:851] "Failed to get status for pod" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7fpw\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.478392 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555017 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-session\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555090 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-ocp-branding-template\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555129 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-trusted-ca-bundle\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555183 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-idp-0-file-data\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555230 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-dir\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555263 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-error\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: 
\"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555316 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-provider-selection\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555395 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555416 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-service-ca\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555510 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-login\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555559 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-serving-cert\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555585 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-cliconfig\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555680 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-policies\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555718 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-router-certs\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.555745 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdjhc\" (UniqueName: \"kubernetes.io/projected/20e6fd86-f979-409b-9577-6ce3a00ff73d-kube-api-access-xdjhc\") pod \"20e6fd86-f979-409b-9577-6ce3a00ff73d\" (UID: \"20e6fd86-f979-409b-9577-6ce3a00ff73d\") " Jan 22 10:40:59 crc 
kubenswrapper[4975]: I0122 10:40:59.556117 4975 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.556302 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.556319 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.557722 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.558041 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.563077 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e6fd86-f979-409b-9577-6ce3a00ff73d-kube-api-access-xdjhc" (OuterVolumeSpecName: "kube-api-access-xdjhc") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "kube-api-access-xdjhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.563303 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.563801 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.564288 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.565514 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.566159 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.566239 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.566457 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.567109 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "20e6fd86-f979-409b-9577-6ce3a00ff73d" (UID: "20e6fd86-f979-409b-9577-6ce3a00ff73d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658314 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658426 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658451 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658471 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658496 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658515 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658535 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658554 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658572 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658590 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658609 4975 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e6fd86-f979-409b-9577-6ce3a00ff73d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658626 4975 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/20e6fd86-f979-409b-9577-6ce3a00ff73d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:59 crc kubenswrapper[4975]: I0122 10:40:59.658644 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdjhc\" (UniqueName: \"kubernetes.io/projected/20e6fd86-f979-409b-9577-6ce3a00ff73d-kube-api-access-xdjhc\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.448104 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.449287 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.450044 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.450662 4975 status_manager.go:851] "Failed to get status for pod" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7fpw\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.454037 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.454534 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.454964 4975 status_manager.go:851] "Failed to get status for pod" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7fpw\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.754586 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.755983 4975 status_manager.go:851] "Failed to get status for pod" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7fpw\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.756458 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.756963 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.771280 4975 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.771322 4975 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:00 crc kubenswrapper[4975]: E0122 10:41:00.771915 4975 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:00 crc kubenswrapper[4975]: I0122 10:41:00.772544 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:00 crc kubenswrapper[4975]: W0122 10:41:00.839628 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ff7804c58eac6bddb96503ee0ec972fc137c12057b89278c6cdffab52b3ad9c4 WatchSource:0}: Error finding container ff7804c58eac6bddb96503ee0ec972fc137c12057b89278c6cdffab52b3ad9c4: Status 404 returned error can't find the container with id ff7804c58eac6bddb96503ee0ec972fc137c12057b89278c6cdffab52b3ad9c4 Jan 22 10:41:00 crc kubenswrapper[4975]: E0122 10:41:00.858118 4975 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" volumeName="registry-storage" Jan 22 10:41:01 crc kubenswrapper[4975]: I0122 10:41:01.457228 4975 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="30422036b3ddabfa29d28854f4afb12d2f538357a7913b917199ba2a2cb8e4b4" exitCode=0 Jan 22 10:41:01 crc kubenswrapper[4975]: I0122 10:41:01.457312 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"30422036b3ddabfa29d28854f4afb12d2f538357a7913b917199ba2a2cb8e4b4"} Jan 22 10:41:01 crc kubenswrapper[4975]: I0122 10:41:01.457695 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ff7804c58eac6bddb96503ee0ec972fc137c12057b89278c6cdffab52b3ad9c4"} Jan 22 10:41:01 crc kubenswrapper[4975]: I0122 10:41:01.458667 4975 status_manager.go:851] "Failed to get status for pod" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" pod="openshift-authentication/oauth-openshift-558db77b4-c7fpw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7fpw\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:01 crc kubenswrapper[4975]: I0122 10:41:01.458727 4975 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:01 crc kubenswrapper[4975]: I0122 10:41:01.458753 4975 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:01 crc kubenswrapper[4975]: I0122 10:41:01.459240 4975 status_manager.go:851] "Failed to get status for pod" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:01 crc kubenswrapper[4975]: E0122 10:41:01.459290 4975 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:01 crc kubenswrapper[4975]: I0122 10:41:01.460011 4975 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 22 10:41:02 crc kubenswrapper[4975]: I0122 10:41:02.470979 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 10:41:02 crc kubenswrapper[4975]: I0122 10:41:02.471323 4975 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f" exitCode=1 Jan 22 10:41:02 crc kubenswrapper[4975]: I0122 10:41:02.471368 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f"} Jan 22 10:41:02 crc kubenswrapper[4975]: I0122 10:41:02.471935 4975 scope.go:117] "RemoveContainer" containerID="2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f" Jan 22 10:41:02 crc kubenswrapper[4975]: I0122 10:41:02.483798 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"04ac33401ce110e59fe5a320c6d6ce607e917dc404cdd43aaa5672b3fe205c66"} Jan 22 10:41:02 crc kubenswrapper[4975]: I0122 10:41:02.483845 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cdc2803d910fc5b9d802cab5a9d8e3232c77ef5a5d410e97e5696b4e7dbb9f61"} Jan 22 10:41:02 crc kubenswrapper[4975]: I0122 10:41:02.483859 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ec06cbd1b0e9ee2f14f9ec5a5dbe7d7c3c5d4c4927af954eb6da4d96a0a96218"} Jan 22 10:41:03 crc kubenswrapper[4975]: I0122 10:41:03.495070 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b38e40327735066cf23f608438a333b3358bd33792ab957d9f322c390b81591"} Jan 22 10:41:03 crc kubenswrapper[4975]: I0122 10:41:03.495323 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1708ed9a07efb4931f683ebfaef48aaa8bd4d6bfa17207e0773ff834b1c5e70"} Jan 22 10:41:03 crc kubenswrapper[4975]: I0122 10:41:03.495634 4975 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:03 crc kubenswrapper[4975]: I0122 10:41:03.495652 4975 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:03 crc kubenswrapper[4975]: I0122 10:41:03.495865 4975 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:03 crc kubenswrapper[4975]: I0122 10:41:03.498455 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 10:41:03 crc kubenswrapper[4975]: I0122 10:41:03.498493 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e892ab9ec158b1521adb9be76f56f8c0ae417c3166c7353d87e90373ddb129a"} Jan 22 10:41:04 crc kubenswrapper[4975]: I0122 10:41:04.084001 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:41:04 crc kubenswrapper[4975]: I0122 10:41:04.084676 4975 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 10:41:04 crc kubenswrapper[4975]: I0122 10:41:04.084757 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 10:41:04 crc kubenswrapper[4975]: I0122 10:41:04.818222 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:41:05 crc kubenswrapper[4975]: I0122 10:41:05.778442 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:05 crc kubenswrapper[4975]: I0122 10:41:05.778475 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:05 crc kubenswrapper[4975]: I0122 10:41:05.782714 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:08 crc kubenswrapper[4975]: I0122 10:41:08.525126 4975 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:08 crc kubenswrapper[4975]: I0122 10:41:08.645852 4975 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b50c5fcd-70f9-4ea6-b80e-d52d6f458774" Jan 22 10:41:09 crc kubenswrapper[4975]: I0122 10:41:09.543896 4975 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:09 crc kubenswrapper[4975]: I0122 10:41:09.543942 4975 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:09 crc kubenswrapper[4975]: I0122 10:41:09.547981 4975 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b50c5fcd-70f9-4ea6-b80e-d52d6f458774" Jan 22 10:41:14 crc kubenswrapper[4975]: I0122 10:41:14.083998 4975 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 10:41:14 crc kubenswrapper[4975]: I0122 10:41:14.084406 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 10:41:15 crc kubenswrapper[4975]: I0122 10:41:15.573994 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 10:41:16 crc kubenswrapper[4975]: I0122 10:41:16.217223 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 10:41:18 crc kubenswrapper[4975]: I0122 10:41:18.986811 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 10:41:19 crc kubenswrapper[4975]: I0122 10:41:19.741432 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 10:41:19 crc kubenswrapper[4975]: I0122 10:41:19.872411 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 10:41:20 crc kubenswrapper[4975]: I0122 10:41:20.375016 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 10:41:20 crc kubenswrapper[4975]: I0122 10:41:20.478784 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 10:41:20 crc kubenswrapper[4975]: I0122 10:41:20.630970 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 10:41:20 crc kubenswrapper[4975]: I0122 10:41:20.675097 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 10:41:20 crc kubenswrapper[4975]: I0122 10:41:20.771741 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.003775 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.048393 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.149829 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.301708 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 
10:41:21.425539 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.447817 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.543016 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.568067 4975 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.877745 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.904509 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 10:41:21 crc kubenswrapper[4975]: I0122 10:41:21.959809 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 10:41:22 crc kubenswrapper[4975]: I0122 10:41:22.266773 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 10:41:22 crc kubenswrapper[4975]: I0122 10:41:22.417179 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 10:41:22 crc kubenswrapper[4975]: I0122 10:41:22.993868 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.066233 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.121924 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.139196 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.363780 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.484898 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.575136 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.630887 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.670734 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.749522 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.904626 4975 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.928729 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 10:41:23 crc kubenswrapper[4975]: I0122 10:41:23.973114 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.065156 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.073875 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.083711 4975 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.083802 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.083878 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.084934 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"9e892ab9ec158b1521adb9be76f56f8c0ae417c3166c7353d87e90373ddb129a"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.085195 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://9e892ab9ec158b1521adb9be76f56f8c0ae417c3166c7353d87e90373ddb129a" gracePeriod=30 Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.139840 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.289609 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.316911 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.335586 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.375315 4975 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.400162 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.428767 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.443458 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.456532 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.514731 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.545227 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.683983 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.843479 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 10:41:24 crc kubenswrapper[4975]: I0122 10:41:24.885761 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.032671 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.052313 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.058579 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.059967 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.098783 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.125593 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.150012 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.215669 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.234545 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.271969 4975 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.275040 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.353914 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.545208 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.551479 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.590798 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.750462 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.817106 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.847650 4975 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.892939 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.910381 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.912112 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.930445 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 10:41:25 crc kubenswrapper[4975]: I0122 10:41:25.949874 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.016901 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.046023 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.068598 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.080682 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.090885 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.126367 4975 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.169838 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.312087 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.324594 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.337384 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.391358 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.459588 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.515322 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.534012 4975 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.544000 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.757689 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.809587 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.815908 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.847833 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.870853 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 10:41:26 crc kubenswrapper[4975]: I0122 10:41:26.993772 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.002125 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.022033 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.064995 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 
22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.082281 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.145609 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.223666 4975 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.294401 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.336526 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.471750 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.489010 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.523868 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.632816 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.680693 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.683159 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.688524 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.716597 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.728006 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.728711 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.770965 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.774500 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.776788 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 10:41:27 crc kubenswrapper[4975]: I0122 10:41:27.972414 4975 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"trusted-ca-bundle" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.022420 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.361437 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.384867 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.411862 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.465565 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.783032 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.808928 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.809061 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.851772 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.871262 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.927655 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.955546 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.955904 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 10:41:28 crc kubenswrapper[4975]: I0122 10:41:28.960652 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.016319 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.057886 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.076559 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.143871 4975 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.149710 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.149683441 podStartE2EDuration="40.149683441s" podCreationTimestamp="2026-01-22 10:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:41:08.602454694 +0000 UTC m=+261.260490814" watchObservedRunningTime="2026-01-22 10:41:29.149683441 +0000 UTC m=+281.807719591" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.151299 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c7fpw","openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.151569 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-5477954dc8-n6qzz"] Jan 22 10:41:29 crc kubenswrapper[4975]: E0122 10:41:29.151878 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" containerName="installer" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.151905 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" containerName="installer" Jan 22 10:41:29 crc kubenswrapper[4975]: E0122 10:41:29.151927 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" containerName="oauth-openshift" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.151940 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" containerName="oauth-openshift" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.152131 4975 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.152156 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" containerName="oauth-openshift" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.152174 4975 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01133b76-3a28-4517-98fd-2526adefd865" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.152194 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0877a81-c73b-4269-b7d4-14834d43e46e" containerName="installer" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.152863 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.163891 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.164277 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.164549 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.164751 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.164945 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.167132 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.169914 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.170237 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.170556 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.170818 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.189093 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.189986 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.191847 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.192442 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.196119 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.202722 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.218018 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.241958 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.241932315 
podStartE2EDuration="21.241932315s" podCreationTimestamp="2026-01-22 10:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:41:29.239869794 +0000 UTC m=+281.897905914" watchObservedRunningTime="2026-01-22 10:41:29.241932315 +0000 UTC m=+281.899968435" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.249943 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.271960 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272048 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272092 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/246e6ccc-8690-43e8-984f-cdc9c202e700-audit-dir\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272362 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-audit-policies\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272439 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4bm\" (UniqueName: \"kubernetes.io/projected/246e6ccc-8690-43e8-984f-cdc9c202e700-kube-api-access-qc4bm\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272564 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-service-ca\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272654 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272693 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-session\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272718 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272769 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272804 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-router-certs\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272851 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-template-error\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272891 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.272923 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-template-login\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.282079 4975 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.309026 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.371809 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.373876 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/246e6ccc-8690-43e8-984f-cdc9c202e700-audit-dir\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.373928 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-audit-policies\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.373956 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc4bm\" (UniqueName: \"kubernetes.io/projected/246e6ccc-8690-43e8-984f-cdc9c202e700-kube-api-access-qc4bm\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.373980 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-service-ca\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374020 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374053 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-session\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374077 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc 
kubenswrapper[4975]: I0122 10:41:29.374107 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374040 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/246e6ccc-8690-43e8-984f-cdc9c202e700-audit-dir\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374134 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-router-certs\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374263 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-template-error\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374320 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374421 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-template-login\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374465 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374608 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374814 4975 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-service-ca\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.374829 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-audit-policies\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.375480 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.375833 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.379967 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-router-certs\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.380455 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.380870 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.380998 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-template-error\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.381555 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.384946 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.387803 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-user-template-login\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.389213 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/246e6ccc-8690-43e8-984f-cdc9c202e700-v4-0-config-system-session\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.390848 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc4bm\" (UniqueName: \"kubernetes.io/projected/246e6ccc-8690-43e8-984f-cdc9c202e700-kube-api-access-qc4bm\") pod \"oauth-openshift-5477954dc8-n6qzz\" (UID: \"246e6ccc-8690-43e8-984f-cdc9c202e700\") " pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.474326 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.513749 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.532140 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.576146 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.661700 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.664797 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.712219 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.762238 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e6fd86-f979-409b-9577-6ce3a00ff73d" path="/var/lib/kubelet/pods/20e6fd86-f979-409b-9577-6ce3a00ff73d/volumes" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.833567 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.872719 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 10:41:29 crc kubenswrapper[4975]: I0122 10:41:29.941945 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.012996 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.133501 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.304518 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.307536 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.407989 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.442373 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.475086 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.556670 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.622847 4975 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.697605 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.701542 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.704198 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.815544 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.828899 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.830858 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.834883 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 10:41:30 crc kubenswrapper[4975]: I0122 10:41:30.998317 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.222608 4975 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.223199 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3b66f8daa847d7df9dfa093ba5ded3a347c90f8cf86f126ff11c7424194d1aae" gracePeriod=5 Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.326412 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5477954dc8-n6qzz"] Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.435704 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.474541 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.556363 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.574392 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.682552 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.690096 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" 
event={"ID":"246e6ccc-8690-43e8-984f-cdc9c202e700","Type":"ContainerStarted","Data":"643d480018fcffa48795e5c1e81753c73ba364db2dafae2f54a2d50e5c96ebd1"} Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.690176 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" event={"ID":"246e6ccc-8690-43e8-984f-cdc9c202e700","Type":"ContainerStarted","Data":"1c961e125d4bb5930fddc7a06baff864875cf85e13fe515b01381c137f7bb2f7"} Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.690674 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.691515 4975 patch_prober.go:28] interesting pod/oauth-openshift-5477954dc8-n6qzz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.691561 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" podUID="246e6ccc-8690-43e8-984f-cdc9c202e700" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.777647 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.819644 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 10:41:31 crc kubenswrapper[4975]: I0122 10:41:31.907409 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.023451 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.082048 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.097999 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.178084 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.205258 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.260197 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.315682 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.338560 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.469557 4975 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.474168 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.517792 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.574658 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.585938 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.647395 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.674918 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.694718 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.713039 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.744094 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5477954dc8-n6qzz" podStartSLOduration=58.744075059 podStartE2EDuration="58.744075059s" podCreationTimestamp="2026-01-22 10:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:41:31.723769777 +0000 UTC m=+284.381805947" watchObservedRunningTime="2026-01-22 10:41:32.744075059 +0000 UTC m=+285.402111179" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.806704 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.847771 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.937915 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 10:41:32 crc kubenswrapper[4975]: I0122 10:41:32.989713 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 10:41:33 crc kubenswrapper[4975]: I0122 10:41:33.147375 4975 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 10:41:33 crc kubenswrapper[4975]: I0122 10:41:33.225521 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 10:41:33 crc kubenswrapper[4975]: I0122 10:41:33.301819 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 10:41:33 crc kubenswrapper[4975]: I0122 10:41:33.567890 4975 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 10:41:33 crc kubenswrapper[4975]: I0122 10:41:33.622216 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 10:41:33 crc kubenswrapper[4975]: I0122 10:41:33.902552 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.003486 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.033510 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.061318 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.232446 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.342264 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.354836 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.502820 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.503209 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.567142 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.576306 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.685076 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.826287 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 10:41:34 crc kubenswrapper[4975]: I0122 10:41:34.981606 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.045572 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.071748 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.100586 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.247199 4975 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.364850 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.617898 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.690072 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.725788 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.768302 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.804456 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.964828 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 10:41:35 crc kubenswrapper[4975]: I0122 10:41:35.982208 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.128550 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.187222 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.352492 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.596172 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.632704 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.726760 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.726851 4975 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3b66f8daa847d7df9dfa093ba5ded3a347c90f8cf86f126ff11c7424194d1aae" exitCode=137 Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.768409 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.837037 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.837491 4975 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882293 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882427 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882467 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882490 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882538 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882546 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882570 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882570 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882672 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882822 4975 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882835 4975 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882845 4975 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.882853 4975 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.891170 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:41:36 crc kubenswrapper[4975]: I0122 10:41:36.983514 4975 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.174131 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.263676 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.631644 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.735155 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.735262 4975 scope.go:117] "RemoveContainer" containerID="3b66f8daa847d7df9dfa093ba5ded3a347c90f8cf86f126ff11c7424194d1aae" Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.735467 4975 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.779158 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.779493 4975 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.792756 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.792821 4975 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="907031ae-8e72-4127-b3bb-cd0c2d57b82a"
Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.795910 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.795954 4975 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="907031ae-8e72-4127-b3bb-cd0c2d57b82a"
Jan 22 10:41:37 crc kubenswrapper[4975]: I0122 10:41:37.959258 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 10:41:38 crc kubenswrapper[4975]: I0122 10:41:38.625444 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 22 10:41:38 crc kubenswrapper[4975]: I0122 10:41:38.651050 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 22 10:41:47 crc kubenswrapper[4975]: I0122 10:41:47.599905 4975 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 22 10:41:51 crc kubenswrapper[4975]: I0122 10:41:51.842178 4975 generic.go:334] "Generic (PLEG): container finished" podID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerID="c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4" exitCode=0
Jan 22 10:41:51 crc kubenswrapper[4975]: I0122 10:41:51.842311 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" event={"ID":"48c3f95e-78eb-4e10-bd32-4c4644190f01","Type":"ContainerDied","Data":"c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4"}
Jan 22 10:41:51 crc kubenswrapper[4975]: I0122 10:41:51.843325 4975 scope.go:117] "RemoveContainer" containerID="c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4"
Jan 22 10:41:52 crc kubenswrapper[4975]: I0122 10:41:52.864408 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" event={"ID":"48c3f95e-78eb-4e10-bd32-4c4644190f01","Type":"ContainerStarted","Data":"9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d"}
Jan 22 10:41:52 crc kubenswrapper[4975]: I0122 10:41:52.864881 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh"
Jan 22 10:41:52 crc kubenswrapper[4975]: I0122 10:41:52.867469 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh"
Jan 22 10:41:54 crc kubenswrapper[4975]: I0122 10:41:54.884196 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 22 10:41:54 crc kubenswrapper[4975]: I0122 10:41:54.886858 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 22 10:41:54 crc kubenswrapper[4975]: I0122 10:41:54.886926 4975 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9e892ab9ec158b1521adb9be76f56f8c0ae417c3166c7353d87e90373ddb129a" exitCode=137
Jan 22 10:41:54 crc kubenswrapper[4975]: I0122 10:41:54.887422 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9e892ab9ec158b1521adb9be76f56f8c0ae417c3166c7353d87e90373ddb129a"}
Jan 22 10:41:54 crc kubenswrapper[4975]: I0122 10:41:54.887465 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4af170eb2f4daaa3cb2a202c254ab2223e11df2f4c2fe0e65f54d3794f17802c"}
Jan 22 10:41:54 crc kubenswrapper[4975]: I0122 10:41:54.887487 4975 scope.go:117] "RemoveContainer" containerID="2db343cf5cb26a8c8936dc2b8d150d527a73df3d4d0cf6c620e1397594b7927f"
Jan 22 10:41:55 crc kubenswrapper[4975]: I0122 10:41:55.896854 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 22 10:42:04 crc kubenswrapper[4975]: I0122 10:42:04.083390 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 10:42:04 crc kubenswrapper[4975]: I0122 10:42:04.090662 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 10:42:04 crc kubenswrapper[4975]: I0122 10:42:04.818227 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 10:42:04 crc kubenswrapper[4975]: I0122 10:42:04.824476 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.167174 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4"]
Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.167741 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" podUID="20dff28e-fc21-42b9-9c2a-1ee8588ed83e" containerName="route-controller-manager" containerID="cri-o://c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a" gracePeriod=30
Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.188225 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck7qg"]
pods=["openshift-controller-manager/controller-manager-879f6c89f-ck7qg"] Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.188682 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" podUID="128e6487-d288-47e9-92db-491404d8ec40" containerName="controller-manager" containerID="cri-o://ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5" gracePeriod=30 Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.579432 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.592162 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.679206 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-proxy-ca-bundles\") pod \"128e6487-d288-47e9-92db-491404d8ec40\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.679306 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-client-ca\") pod \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.679342 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/128e6487-d288-47e9-92db-491404d8ec40-serving-cert\") pod \"128e6487-d288-47e9-92db-491404d8ec40\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.679366 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-config\") pod \"128e6487-d288-47e9-92db-491404d8ec40\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.679384 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgcmb\" (UniqueName: \"kubernetes.io/projected/128e6487-d288-47e9-92db-491404d8ec40-kube-api-access-fgcmb\") pod \"128e6487-d288-47e9-92db-491404d8ec40\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.679400 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-serving-cert\") pod \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.679417 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-client-ca\") pod \"128e6487-d288-47e9-92db-491404d8ec40\" (UID: \"128e6487-d288-47e9-92db-491404d8ec40\") " Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.679441 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp6h4\" (UniqueName: 
\"kubernetes.io/projected/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-kube-api-access-pp6h4\") pod \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.679487 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-config\") pod \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\" (UID: \"20dff28e-fc21-42b9-9c2a-1ee8588ed83e\") " Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.680216 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-client-ca" (OuterVolumeSpecName: "client-ca") pod "20dff28e-fc21-42b9-9c2a-1ee8588ed83e" (UID: "20dff28e-fc21-42b9-9c2a-1ee8588ed83e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.680238 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-client-ca" (OuterVolumeSpecName: "client-ca") pod "128e6487-d288-47e9-92db-491404d8ec40" (UID: "128e6487-d288-47e9-92db-491404d8ec40"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.680262 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-config" (OuterVolumeSpecName: "config") pod "128e6487-d288-47e9-92db-491404d8ec40" (UID: "128e6487-d288-47e9-92db-491404d8ec40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.680807 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "128e6487-d288-47e9-92db-491404d8ec40" (UID: "128e6487-d288-47e9-92db-491404d8ec40"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.681046 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-config" (OuterVolumeSpecName: "config") pod "20dff28e-fc21-42b9-9c2a-1ee8588ed83e" (UID: "20dff28e-fc21-42b9-9c2a-1ee8588ed83e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.685321 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20dff28e-fc21-42b9-9c2a-1ee8588ed83e" (UID: "20dff28e-fc21-42b9-9c2a-1ee8588ed83e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.685474 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128e6487-d288-47e9-92db-491404d8ec40-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "128e6487-d288-47e9-92db-491404d8ec40" (UID: "128e6487-d288-47e9-92db-491404d8ec40"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.685525 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-kube-api-access-pp6h4" (OuterVolumeSpecName: "kube-api-access-pp6h4") pod "20dff28e-fc21-42b9-9c2a-1ee8588ed83e" (UID: "20dff28e-fc21-42b9-9c2a-1ee8588ed83e"). InnerVolumeSpecName "kube-api-access-pp6h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.685733 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128e6487-d288-47e9-92db-491404d8ec40-kube-api-access-fgcmb" (OuterVolumeSpecName: "kube-api-access-fgcmb") pod "128e6487-d288-47e9-92db-491404d8ec40" (UID: "128e6487-d288-47e9-92db-491404d8ec40"). InnerVolumeSpecName "kube-api-access-fgcmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.785043 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.785360 4975 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.785425 4975 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.785436 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/128e6487-d288-47e9-92db-491404d8ec40-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.785446 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.785454 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgcmb\" (UniqueName: \"kubernetes.io/projected/128e6487-d288-47e9-92db-491404d8ec40-kube-api-access-fgcmb\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.785483 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.785492 4975 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/128e6487-d288-47e9-92db-491404d8ec40-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:20 crc kubenswrapper[4975]: I0122 10:42:20.785500 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp6h4\" (UniqueName: \"kubernetes.io/projected/20dff28e-fc21-42b9-9c2a-1ee8588ed83e-kube-api-access-pp6h4\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.068764 4975 generic.go:334] "Generic (PLEG): container finished" podID="20dff28e-fc21-42b9-9c2a-1ee8588ed83e" 
containerID="c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a" exitCode=0 Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.068837 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" event={"ID":"20dff28e-fc21-42b9-9c2a-1ee8588ed83e","Type":"ContainerDied","Data":"c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a"} Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.068865 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" event={"ID":"20dff28e-fc21-42b9-9c2a-1ee8588ed83e","Type":"ContainerDied","Data":"bbda8ac4037fad87b5bf8be7d27620ab0b6a40b1d52a886ff3ed12859a03cce6"} Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.068881 4975 scope.go:117] "RemoveContainer" containerID="c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.068983 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.071239 4975 generic.go:334] "Generic (PLEG): container finished" podID="128e6487-d288-47e9-92db-491404d8ec40" containerID="ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5" exitCode=0 Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.071290 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" event={"ID":"128e6487-d288-47e9-92db-491404d8ec40","Type":"ContainerDied","Data":"ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5"} Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.071350 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" event={"ID":"128e6487-d288-47e9-92db-491404d8ec40","Type":"ContainerDied","Data":"5771e6f435334ee7735999084bf34306b514812b4df6ff62d40ae21cdfaa79d8"} Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.071382 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ck7qg" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.094515 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4"] Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.097182 4975 scope.go:117] "RemoveContainer" containerID="c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.098576 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6csc4"] Jan 22 10:42:21 crc kubenswrapper[4975]: E0122 10:42:21.101473 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a\": container with ID starting with c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a not found: ID does not exist" containerID="c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.101733 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a"} err="failed to get container status \"c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a\": rpc error: code = NotFound desc = could not find container \"c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a\": container with ID starting with c42c328444b519ac882f6600a3ac0b4188b02ebc348a1a624adb0806babb262a not found: ID does not exist" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.102369 4975 scope.go:117] "RemoveContainer" containerID="ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.105929 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck7qg"] Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.118896 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck7qg"] Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.126864 4975 scope.go:117] "RemoveContainer" containerID="ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5" Jan 22 10:42:21 crc kubenswrapper[4975]: E0122 10:42:21.127386 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5\": container with ID starting with ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5 not found: ID does not exist" containerID="ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.127424 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5"} err="failed to get container status \"ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5\": rpc error: code = NotFound desc = could not find container \"ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5\": container with ID starting with ddfb06572f09cc12ca401ada0c67d42b9ce8cb1014b436442c65e3def03051c5 not found: ID does not exist" Jan 22 10:42:21 crc 
kubenswrapper[4975]: I0122 10:42:21.764910 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="128e6487-d288-47e9-92db-491404d8ec40" path="/var/lib/kubelet/pods/128e6487-d288-47e9-92db-491404d8ec40/volumes" Jan 22 10:42:21 crc kubenswrapper[4975]: I0122 10:42:21.767826 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20dff28e-fc21-42b9-9c2a-1ee8588ed83e" path="/var/lib/kubelet/pods/20dff28e-fc21-42b9-9c2a-1ee8588ed83e/volumes" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.165865 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"] Jan 22 10:42:22 crc kubenswrapper[4975]: E0122 10:42:22.166257 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128e6487-d288-47e9-92db-491404d8ec40" containerName="controller-manager" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.166277 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="128e6487-d288-47e9-92db-491404d8ec40" containerName="controller-manager" Jan 22 10:42:22 crc kubenswrapper[4975]: E0122 10:42:22.166288 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.166296 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 10:42:22 crc kubenswrapper[4975]: E0122 10:42:22.166311 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20dff28e-fc21-42b9-9c2a-1ee8588ed83e" containerName="route-controller-manager" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.166319 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="20dff28e-fc21-42b9-9c2a-1ee8588ed83e" containerName="route-controller-manager" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.166463 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="20dff28e-fc21-42b9-9c2a-1ee8588ed83e" containerName="route-controller-manager" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.166482 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.166493 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="128e6487-d288-47e9-92db-491404d8ec40" containerName="controller-manager" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.166976 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.172802 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.173966 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"] Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.175141 4975 util.go:30] "No sandbox for pod can be found. 
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.176520 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.176547 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.176574 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.176859 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.177817 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.178890 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.178892 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.179239 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.183059 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.183203 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.183409 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"]
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.186882 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.190898 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.193034 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"]
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.306599 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-config\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.306685 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a1eecee-490f-4b54-a976-e9832770eebe-client-ca\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.306839 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1eecee-490f-4b54-a976-e9832770eebe-serving-cert\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.306896 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvwnm\" (UniqueName: \"kubernetes.io/projected/9a1eecee-490f-4b54-a976-e9832770eebe-kube-api-access-fvwnm\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.306954 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-serving-cert\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.307029 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a1eecee-490f-4b54-a976-e9832770eebe-config\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.307090 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-proxy-ca-bundles\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.307149 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4njhs\" (UniqueName: \"kubernetes.io/projected/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-kube-api-access-4njhs\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.307375 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-client-ca\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.408527 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1eecee-490f-4b54-a976-e9832770eebe-serving-cert\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.408631 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvwnm\" (UniqueName: \"kubernetes.io/projected/9a1eecee-490f-4b54-a976-e9832770eebe-kube-api-access-fvwnm\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.408682 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-serving-cert\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.408755 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a1eecee-490f-4b54-a976-e9832770eebe-config\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.408796 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-proxy-ca-bundles\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.408855 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4njhs\" (UniqueName: \"kubernetes.io/projected/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-kube-api-access-4njhs\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.408931 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-client-ca\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.408985 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-config\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.409027 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a1eecee-490f-4b54-a976-e9832770eebe-client-ca\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.410135 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-client-ca\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.410380 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-proxy-ca-bundles\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.410844 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a1eecee-490f-4b54-a976-e9832770eebe-client-ca\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.410922 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-config\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.411164 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a1eecee-490f-4b54-a976-e9832770eebe-config\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.422550 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-serving-cert\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.422646 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1eecee-490f-4b54-a976-e9832770eebe-serving-cert\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.435586 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4njhs\" (UniqueName: \"kubernetes.io/projected/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-kube-api-access-4njhs\") pod \"controller-manager-cfcdf47c7-bjjbb\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.438472 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvwnm\" (UniqueName: \"kubernetes.io/projected/9a1eecee-490f-4b54-a976-e9832770eebe-kube-api-access-fvwnm\") pod \"route-controller-manager-c94996dc5-cmxp5\" (UID: \"9a1eecee-490f-4b54-a976-e9832770eebe\") " pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.500634 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.510650 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.769089 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"]
Jan 22 10:42:22 crc kubenswrapper[4975]: W0122 10:42:22.780454 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc02cca59_a13b_4ce5_8239_ec0cf9eae6e0.slice/crio-ab90794a6236929fdd1ee2e29c82c93ba0142dc04c82fdd4c063cc520e4f49d4 WatchSource:0}: Error finding container ab90794a6236929fdd1ee2e29c82c93ba0142dc04c82fdd4c063cc520e4f49d4: Status 404 returned error can't find the container with id ab90794a6236929fdd1ee2e29c82c93ba0142dc04c82fdd4c063cc520e4f49d4
Jan 22 10:42:22 crc kubenswrapper[4975]: I0122 10:42:22.990963 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"]
Jan 22 10:42:23 crc kubenswrapper[4975]: W0122 10:42:23.001429 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a1eecee_490f_4b54_a976_e9832770eebe.slice/crio-7432845fc383a2d8ac4f38bb912ab7f3134e38db057bb6dc8257c9778a0bce70 WatchSource:0}: Error finding container 7432845fc383a2d8ac4f38bb912ab7f3134e38db057bb6dc8257c9778a0bce70: Status 404 returned error can't find the container with id 7432845fc383a2d8ac4f38bb912ab7f3134e38db057bb6dc8257c9778a0bce70
Jan 22 10:42:23 crc kubenswrapper[4975]: I0122 10:42:23.108205 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5" event={"ID":"9a1eecee-490f-4b54-a976-e9832770eebe","Type":"ContainerStarted","Data":"7432845fc383a2d8ac4f38bb912ab7f3134e38db057bb6dc8257c9778a0bce70"}
Jan 22 10:42:23 crc kubenswrapper[4975]: I0122 10:42:23.111927 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb" event={"ID":"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0","Type":"ContainerStarted","Data":"ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468"}
Jan 22 10:42:23 crc kubenswrapper[4975]: I0122 10:42:23.111976 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb" event={"ID":"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0","Type":"ContainerStarted","Data":"ab90794a6236929fdd1ee2e29c82c93ba0142dc04c82fdd4c063cc520e4f49d4"}
Jan 22 10:42:23 crc kubenswrapper[4975]: I0122 10:42:23.112525 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:23 crc kubenswrapper[4975]: I0122 10:42:23.126925 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"
Jan 22 10:42:23 crc kubenswrapper[4975]: I0122 10:42:23.137422 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb" podStartSLOduration=3.13740048 podStartE2EDuration="3.13740048s" podCreationTimestamp="2026-01-22 10:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:23.134302648 +0000 UTC m=+335.792338748" watchObservedRunningTime="2026-01-22 10:42:23.13740048 +0000 UTC m=+335.795436590"
Jan 22 10:42:24 crc kubenswrapper[4975]: I0122 10:42:24.119958 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5" event={"ID":"9a1eecee-490f-4b54-a976-e9832770eebe","Type":"ContainerStarted","Data":"aa90f107648b1415d9c183eaaba12e623194cfa552f48e6b84cf70be005b563c"}
Jan 22 10:42:24 crc kubenswrapper[4975]: I0122 10:42:24.145389 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5" podStartSLOduration=4.145367953 podStartE2EDuration="4.145367953s" podCreationTimestamp="2026-01-22 10:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:24.142618702 +0000 UTC m=+336.800654812" watchObservedRunningTime="2026-01-22 10:42:24.145367953 +0000 UTC m=+336.803404093"
Jan 22 10:42:25 crc kubenswrapper[4975]: I0122 10:42:25.126105 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:25 crc kubenswrapper[4975]: I0122 10:42:25.133611 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c94996dc5-cmxp5"
Jan 22 10:42:57 crc kubenswrapper[4975]: I0122 10:42:57.437040 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:42:57 crc kubenswrapper[4975]: I0122 10:42:57.437570 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.414448 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q6cc7"]
Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.415198 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q6cc7" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerName="registry-server" containerID="cri-o://bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1" gracePeriod=30
Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.431163 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjq6r"]
Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.431481 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qjq6r" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerName="registry-server" containerID="cri-o://39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912" gracePeriod=30
pod="openshift-marketplace/community-operators-qjq6r" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerName="registry-server" containerID="cri-o://39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912" gracePeriod=30 Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.438516 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gdlsh"] Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.438719 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerName="marketplace-operator" containerID="cri-o://9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d" gracePeriod=30 Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.448466 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg94w"] Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.448741 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pg94w" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerName="registry-server" containerID="cri-o://2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed" gracePeriod=30 Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.454219 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lvzvd"] Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.454512 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lvzvd" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="registry-server" containerID="cri-o://e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3" gracePeriod=30 Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.463938 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pkfxd"] Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.464724 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.475773 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pkfxd"] Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.539614 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6cc0ba15-cad5-416b-8428-6614ca1ebbec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pkfxd\" (UID: \"6cc0ba15-cad5-416b-8428-6614ca1ebbec\") " pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.539843 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4bn\" (UniqueName: \"kubernetes.io/projected/6cc0ba15-cad5-416b-8428-6614ca1ebbec-kube-api-access-tf4bn\") pod \"marketplace-operator-79b997595-pkfxd\" (UID: \"6cc0ba15-cad5-416b-8428-6614ca1ebbec\") " pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.539972 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0ba15-cad5-416b-8428-6614ca1ebbec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pkfxd\" (UID: \"6cc0ba15-cad5-416b-8428-6614ca1ebbec\") " pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.641141 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6cc0ba15-cad5-416b-8428-6614ca1ebbec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pkfxd\" (UID: \"6cc0ba15-cad5-416b-8428-6614ca1ebbec\") " pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.641219 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4bn\" (UniqueName: \"kubernetes.io/projected/6cc0ba15-cad5-416b-8428-6614ca1ebbec-kube-api-access-tf4bn\") pod \"marketplace-operator-79b997595-pkfxd\" (UID: \"6cc0ba15-cad5-416b-8428-6614ca1ebbec\") " pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.641243 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0ba15-cad5-416b-8428-6614ca1ebbec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pkfxd\" (UID: \"6cc0ba15-cad5-416b-8428-6614ca1ebbec\") " pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.642798 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0ba15-cad5-416b-8428-6614ca1ebbec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pkfxd\" (UID: \"6cc0ba15-cad5-416b-8428-6614ca1ebbec\") " pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.651032 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/6cc0ba15-cad5-416b-8428-6614ca1ebbec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pkfxd\" (UID: \"6cc0ba15-cad5-416b-8428-6614ca1ebbec\") " pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.662076 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4bn\" (UniqueName: \"kubernetes.io/projected/6cc0ba15-cad5-416b-8428-6614ca1ebbec-kube-api-access-tf4bn\") pod \"marketplace-operator-79b997595-pkfxd\" (UID: \"6cc0ba15-cad5-416b-8428-6614ca1ebbec\") " pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: E0122 10:43:13.795034 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3 is running failed: container process not found" containerID="e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:43:13 crc kubenswrapper[4975]: E0122 10:43:13.795485 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3 is running failed: container process not found" containerID="e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:43:13 crc kubenswrapper[4975]: E0122 10:43:13.795949 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3 is running failed: container process not found" containerID="e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:43:13 crc kubenswrapper[4975]: E0122 10:43:13.795992 4975 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-lvzvd" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="registry-server" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.867291 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.892577 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.933505 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.946088 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-utilities\") pod \"27a32b6c-0030-4c5a-bdbe-61722e22e813\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.946176 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-catalog-content\") pod \"27a32b6c-0030-4c5a-bdbe-61722e22e813\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.946207 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd57v\" (UniqueName: \"kubernetes.io/projected/27a32b6c-0030-4c5a-bdbe-61722e22e813-kube-api-access-nd57v\") pod \"27a32b6c-0030-4c5a-bdbe-61722e22e813\" (UID: \"27a32b6c-0030-4c5a-bdbe-61722e22e813\") " Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.947362 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-utilities" (OuterVolumeSpecName: "utilities") pod "27a32b6c-0030-4c5a-bdbe-61722e22e813" (UID: "27a32b6c-0030-4c5a-bdbe-61722e22e813"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.949936 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.951305 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvzvd" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.953722 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a32b6c-0030-4c5a-bdbe-61722e22e813-kube-api-access-nd57v" (OuterVolumeSpecName: "kube-api-access-nd57v") pod "27a32b6c-0030-4c5a-bdbe-61722e22e813" (UID: "27a32b6c-0030-4c5a-bdbe-61722e22e813"). InnerVolumeSpecName "kube-api-access-nd57v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.965142 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg94w" Jan 22 10:43:13 crc kubenswrapper[4975]: I0122 10:43:13.965859 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.036075 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27a32b6c-0030-4c5a-bdbe-61722e22e813" (UID: "27a32b6c-0030-4c5a-bdbe-61722e22e813"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.050558 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbgjb\" (UniqueName: \"kubernetes.io/projected/7ae0356f-f7ce-4d10-bef2-c322e09f9889-kube-api-access-hbgjb\") pod \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.050785 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-utilities\") pod \"2ef0df15-7a94-4740-9601-0a29b068f5e7\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.050836 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4hsg\" (UniqueName: \"kubernetes.io/projected/2ef0df15-7a94-4740-9601-0a29b068f5e7-kube-api-access-w4hsg\") pod \"2ef0df15-7a94-4740-9601-0a29b068f5e7\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.050875 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qwcw\" (UniqueName: \"kubernetes.io/projected/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-kube-api-access-4qwcw\") pod \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.050897 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-trusted-ca\") pod \"48c3f95e-78eb-4e10-bd32-4c4644190f01\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.050916 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw6ll\" (UniqueName: \"kubernetes.io/projected/48c3f95e-78eb-4e10-bd32-4c4644190f01-kube-api-access-gw6ll\") pod \"48c3f95e-78eb-4e10-bd32-4c4644190f01\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.050943 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-utilities\") pod \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.050973 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-operator-metrics\") pod \"48c3f95e-78eb-4e10-bd32-4c4644190f01\" (UID: \"48c3f95e-78eb-4e10-bd32-4c4644190f01\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.050992 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-utilities\") pod \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.051024 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-catalog-content\") pod \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\" (UID: \"9bb39999-a9af-4a3b-bc78-427d76a9fbd9\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.051047 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-catalog-content\") pod \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\" (UID: \"7ae0356f-f7ce-4d10-bef2-c322e09f9889\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.051064 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-catalog-content\") pod \"2ef0df15-7a94-4740-9601-0a29b068f5e7\" (UID: \"2ef0df15-7a94-4740-9601-0a29b068f5e7\") " Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.051244 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a32b6c-0030-4c5a-bdbe-61722e22e813-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.051255 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd57v\" (UniqueName: \"kubernetes.io/projected/27a32b6c-0030-4c5a-bdbe-61722e22e813-kube-api-access-nd57v\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.052272 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-utilities" (OuterVolumeSpecName: "utilities") pod "7ae0356f-f7ce-4d10-bef2-c322e09f9889" (UID: "7ae0356f-f7ce-4d10-bef2-c322e09f9889"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.052442 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-utilities" (OuterVolumeSpecName: "utilities") pod "9bb39999-a9af-4a3b-bc78-427d76a9fbd9" (UID: "9bb39999-a9af-4a3b-bc78-427d76a9fbd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.053093 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "48c3f95e-78eb-4e10-bd32-4c4644190f01" (UID: "48c3f95e-78eb-4e10-bd32-4c4644190f01"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.056063 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-kube-api-access-4qwcw" (OuterVolumeSpecName: "kube-api-access-4qwcw") pod "9bb39999-a9af-4a3b-bc78-427d76a9fbd9" (UID: "9bb39999-a9af-4a3b-bc78-427d76a9fbd9"). InnerVolumeSpecName "kube-api-access-4qwcw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.056361 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef0df15-7a94-4740-9601-0a29b068f5e7-kube-api-access-w4hsg" (OuterVolumeSpecName: "kube-api-access-w4hsg") pod "2ef0df15-7a94-4740-9601-0a29b068f5e7" (UID: "2ef0df15-7a94-4740-9601-0a29b068f5e7"). InnerVolumeSpecName "kube-api-access-w4hsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.056904 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-utilities" (OuterVolumeSpecName: "utilities") pod "2ef0df15-7a94-4740-9601-0a29b068f5e7" (UID: "2ef0df15-7a94-4740-9601-0a29b068f5e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.057939 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "48c3f95e-78eb-4e10-bd32-4c4644190f01" (UID: "48c3f95e-78eb-4e10-bd32-4c4644190f01"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.057960 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c3f95e-78eb-4e10-bd32-4c4644190f01-kube-api-access-gw6ll" (OuterVolumeSpecName: "kube-api-access-gw6ll") pod "48c3f95e-78eb-4e10-bd32-4c4644190f01" (UID: "48c3f95e-78eb-4e10-bd32-4c4644190f01"). InnerVolumeSpecName "kube-api-access-gw6ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.070475 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae0356f-f7ce-4d10-bef2-c322e09f9889-kube-api-access-hbgjb" (OuterVolumeSpecName: "kube-api-access-hbgjb") pod "7ae0356f-f7ce-4d10-bef2-c322e09f9889" (UID: "7ae0356f-f7ce-4d10-bef2-c322e09f9889"). InnerVolumeSpecName "kube-api-access-hbgjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.075162 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ef0df15-7a94-4740-9601-0a29b068f5e7" (UID: "2ef0df15-7a94-4740-9601-0a29b068f5e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.111946 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ae0356f-f7ce-4d10-bef2-c322e09f9889" (UID: "7ae0356f-f7ce-4d10-bef2-c322e09f9889"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152765 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152822 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152835 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbgjb\" (UniqueName: \"kubernetes.io/projected/7ae0356f-f7ce-4d10-bef2-c322e09f9889-kube-api-access-hbgjb\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152871 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ef0df15-7a94-4740-9601-0a29b068f5e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152884 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4hsg\" (UniqueName: \"kubernetes.io/projected/2ef0df15-7a94-4740-9601-0a29b068f5e7-kube-api-access-w4hsg\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152895 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qwcw\" (UniqueName: \"kubernetes.io/projected/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-kube-api-access-4qwcw\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152906 4975 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152916 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw6ll\" (UniqueName: \"kubernetes.io/projected/48c3f95e-78eb-4e10-bd32-4c4644190f01-kube-api-access-gw6ll\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152968 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152979 4975 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48c3f95e-78eb-4e10-bd32-4c4644190f01-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.152992 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae0356f-f7ce-4d10-bef2-c322e09f9889-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.166458 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bb39999-a9af-4a3b-bc78-427d76a9fbd9" (UID: "9bb39999-a9af-4a3b-bc78-427d76a9fbd9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.253889 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bb39999-a9af-4a3b-bc78-427d76a9fbd9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.286900 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pkfxd"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.438218 4975 generic.go:334] "Generic (PLEG): container finished" podID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerID="9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d" exitCode=0 Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.438290 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" event={"ID":"48c3f95e-78eb-4e10-bd32-4c4644190f01","Type":"ContainerDied","Data":"9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.438323 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" event={"ID":"48c3f95e-78eb-4e10-bd32-4c4644190f01","Type":"ContainerDied","Data":"ee0df0bdfe719c54c1c48f65530502d4a3e883a32e51461e5b9d832d9afc00d6"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.438397 4975 scope.go:117] "RemoveContainer" containerID="9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.438505 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gdlsh" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.443192 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" event={"ID":"6cc0ba15-cad5-416b-8428-6614ca1ebbec","Type":"ContainerStarted","Data":"58d4bf4b627c2c119c5f8690d620e4a47d2f58d589657f10072b61d4dd3a758f"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.443277 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" event={"ID":"6cc0ba15-cad5-416b-8428-6614ca1ebbec","Type":"ContainerStarted","Data":"3e87a5f70b9beb0bb5d8315b1aa52167de6b04e11b9968483049407bd5f6d8f4"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.446940 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.447582 4975 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pkfxd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" start-of-body= Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.447637 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" podUID="6cc0ba15-cad5-416b-8428-6614ca1ebbec" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.456922 4975 generic.go:334] "Generic (PLEG): container 
finished" podID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerID="e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3" exitCode=0 Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.457050 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvzvd" event={"ID":"9bb39999-a9af-4a3b-bc78-427d76a9fbd9","Type":"ContainerDied","Data":"e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.457096 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvzvd" event={"ID":"9bb39999-a9af-4a3b-bc78-427d76a9fbd9","Type":"ContainerDied","Data":"64f07a25ca32f27875b0dd29a01c6b103eb86c2d2dd859b8cd9ec130fa411d55"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.457105 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvzvd" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.461400 4975 scope.go:117] "RemoveContainer" containerID="c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.463674 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6cc7" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.463734 4975 generic.go:334] "Generic (PLEG): container finished" podID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerID="bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1" exitCode=0 Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.463815 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6cc7" event={"ID":"27a32b6c-0030-4c5a-bdbe-61722e22e813","Type":"ContainerDied","Data":"bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.464124 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6cc7" event={"ID":"27a32b6c-0030-4c5a-bdbe-61722e22e813","Type":"ContainerDied","Data":"22f57e1dcda7b443d30e6926c571a4b16ad4d778f39cf9771bfdfe63471a61c0"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.470584 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qjq6r" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.478393 4975 generic.go:334] "Generic (PLEG): container finished" podID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerID="39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912" exitCode=0 Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.478494 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjq6r" event={"ID":"7ae0356f-f7ce-4d10-bef2-c322e09f9889","Type":"ContainerDied","Data":"39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.478520 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjq6r" event={"ID":"7ae0356f-f7ce-4d10-bef2-c322e09f9889","Type":"ContainerDied","Data":"66c05d5d4ede880fbe02d45dd0b3b3309312e94ef53164c396dec5aea8927c24"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.478994 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" podStartSLOduration=1.478980169 podStartE2EDuration="1.478980169s" podCreationTimestamp="2026-01-22 10:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:43:14.470223131 +0000 UTC m=+387.128259281" watchObservedRunningTime="2026-01-22 10:43:14.478980169 +0000 UTC m=+387.137016279" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.503568 4975 scope.go:117] "RemoveContainer" containerID="9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.504532 4975 generic.go:334] "Generic (PLEG): container finished" podID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerID="2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed" exitCode=0 Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.504589 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg94w" event={"ID":"2ef0df15-7a94-4740-9601-0a29b068f5e7","Type":"ContainerDied","Data":"2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.504635 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg94w" event={"ID":"2ef0df15-7a94-4740-9601-0a29b068f5e7","Type":"ContainerDied","Data":"d161ebc6643d3d4c00ce0ee687a8907475cbabdeea6b38b876f1497f1345c976"} Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.504743 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg94w" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.504787 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d\": container with ID starting with 9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d not found: ID does not exist" containerID="9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.504822 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d"} err="failed to get container status \"9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d\": rpc error: code = NotFound desc = could not find container \"9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d\": container with ID starting with 9a17662c029f08267eb99e6539cfc7b8495042363389b81b136ccd05b4120a7d not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.508549 4975 scope.go:117] "RemoveContainer" containerID="c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.508992 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4\": container with ID starting with c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4 not found: ID does not exist" containerID="c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.509043 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4"} err="failed to get container status \"c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4\": rpc error: code = NotFound desc = could not find container \"c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4\": container with ID starting with c62690c8c243f7089d922b8d5d9ed2c4dbc291213b00f69b71a244a2a77611e4 not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.509071 4975 scope.go:117] "RemoveContainer" containerID="e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.532630 4975 scope.go:117] "RemoveContainer" containerID="6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.533269 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gdlsh"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.557254 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gdlsh"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.562643 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjq6r"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.571683 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qjq6r"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.578985 4975 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-q6cc7"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.585189 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q6cc7"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.589157 4975 scope.go:117] "RemoveContainer" containerID="49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.593025 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lvzvd"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.598514 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lvzvd"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.602860 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg94w"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.606345 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg94w"] Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.622053 4975 scope.go:117] "RemoveContainer" containerID="e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.622467 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3\": container with ID starting with e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3 not found: ID does not exist" containerID="e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.622768 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3"} err="failed to get container status \"e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3\": rpc error: code = NotFound desc = could not find container \"e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3\": container with ID starting with e25ebb216bb4c2eff77da0d3f4fef9a5bc31e5681123234a614a2d03573045d3 not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.622822 4975 scope.go:117] "RemoveContainer" containerID="6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.623108 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9\": container with ID starting with 6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9 not found: ID does not exist" containerID="6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.623155 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9"} err="failed to get container status \"6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9\": rpc error: code = NotFound desc = could not find container \"6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9\": container with ID starting with 6faec56f6f319b34323a7f12731788f5ad5e4f25aa9ce91366aa05a5648273a9 not 
found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.623188 4975 scope.go:117] "RemoveContainer" containerID="49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.623722 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217\": container with ID starting with 49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217 not found: ID does not exist" containerID="49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.623748 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217"} err="failed to get container status \"49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217\": rpc error: code = NotFound desc = could not find container \"49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217\": container with ID starting with 49e44f40801548339eb8e924fe0d71bce983c40a4b7ca08cf7ae788396616217 not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.623762 4975 scope.go:117] "RemoveContainer" containerID="bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.638222 4975 scope.go:117] "RemoveContainer" containerID="3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.656890 4975 scope.go:117] "RemoveContainer" containerID="033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.669270 4975 scope.go:117] "RemoveContainer" containerID="bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.669887 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1\": container with ID starting with bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1 not found: ID does not exist" containerID="bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.669939 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1"} err="failed to get container status \"bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1\": rpc error: code = NotFound desc = could not find container \"bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1\": container with ID starting with bec708c7832e92b3d2f42e52205dc1e4ffa52f5283a6cba7a47b7ab8481376e1 not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.669967 4975 scope.go:117] "RemoveContainer" containerID="3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.670478 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447\": container with ID starting with 
3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447 not found: ID does not exist" containerID="3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.670518 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447"} err="failed to get container status \"3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447\": rpc error: code = NotFound desc = could not find container \"3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447\": container with ID starting with 3175720d172ad2f89c9ad57ea37cab5eea78167d720c70ad42d41e0eaaa1d447 not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.670546 4975 scope.go:117] "RemoveContainer" containerID="033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.671620 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365\": container with ID starting with 033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365 not found: ID does not exist" containerID="033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.671647 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365"} err="failed to get container status \"033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365\": rpc error: code = NotFound desc = could not find container \"033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365\": container with ID starting with 033db6467ace7487f4e4b16e28ee75aebeaf6fe8df056ed9764e201232f0f365 not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.671661 4975 scope.go:117] "RemoveContainer" containerID="39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.688605 4975 scope.go:117] "RemoveContainer" containerID="f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.709082 4975 scope.go:117] "RemoveContainer" containerID="624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.722177 4975 scope.go:117] "RemoveContainer" containerID="39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.722591 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912\": container with ID starting with 39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912 not found: ID does not exist" containerID="39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.722629 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912"} err="failed to get container status \"39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912\": rpc error: code = NotFound desc 
= could not find container \"39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912\": container with ID starting with 39121037b7f349c9c02f559cdd236db0939a6a71ba8450e1aca7c8d0f4f94912 not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.722655 4975 scope.go:117] "RemoveContainer" containerID="f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.722907 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e\": container with ID starting with f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e not found: ID does not exist" containerID="f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.722941 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e"} err="failed to get container status \"f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e\": rpc error: code = NotFound desc = could not find container \"f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e\": container with ID starting with f90daa6c9d3f27be0d6a064a67bcf423a5fffbf0430af86d2346f6c544e5e83e not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.722967 4975 scope.go:117] "RemoveContainer" containerID="624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.723264 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c\": container with ID starting with 624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c not found: ID does not exist" containerID="624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.723300 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c"} err="failed to get container status \"624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c\": rpc error: code = NotFound desc = could not find container \"624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c\": container with ID starting with 624c2c1ee0c9158dd89dd8b2dea69b1c4e8662a115ad7773502edffe3066601c not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.723342 4975 scope.go:117] "RemoveContainer" containerID="2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.735377 4975 scope.go:117] "RemoveContainer" containerID="7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.745556 4975 scope.go:117] "RemoveContainer" containerID="e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.759304 4975 scope.go:117] "RemoveContainer" containerID="2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.759631 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed\": container with ID starting with 2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed not found: ID does not exist" containerID="2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.759662 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed"} err="failed to get container status \"2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed\": rpc error: code = NotFound desc = could not find container \"2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed\": container with ID starting with 2e6268aba06753df6e4cfe429f70bf07dedcd3f22591ababbac9f2b05aa6a5ed not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.759679 4975 scope.go:117] "RemoveContainer" containerID="7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.759930 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4\": container with ID starting with 7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4 not found: ID does not exist" containerID="7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.759951 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4"} err="failed to get container status \"7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4\": rpc error: code = NotFound desc = could not find container \"7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4\": container with ID starting with 7d82657a62cc5c77d731738ce1e9c8a9bdcdcd20efa8c787c19532962ca473d4 not found: ID does not exist" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.759963 4975 scope.go:117] "RemoveContainer" containerID="e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b" Jan 22 10:43:14 crc kubenswrapper[4975]: E0122 10:43:14.760164 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b\": container with ID starting with e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b not found: ID does not exist" containerID="e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b" Jan 22 10:43:14 crc kubenswrapper[4975]: I0122 10:43:14.760185 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b"} err="failed to get container status \"e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b\": rpc error: code = NotFound desc = could not find container \"e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b\": container with ID starting with e3579f4266ff297bcd4e9eeccb5f31aaa7a841bfb67a0ef6e87d0fa290d90c9b not found: ID does not exist" Jan 22 10:43:15 crc kubenswrapper[4975]: I0122 10:43:15.906767 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" path="/var/lib/kubelet/pods/27a32b6c-0030-4c5a-bdbe-61722e22e813/volumes" Jan 22 10:43:15 crc kubenswrapper[4975]: I0122 10:43:15.907714 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" path="/var/lib/kubelet/pods/2ef0df15-7a94-4740-9601-0a29b068f5e7/volumes" Jan 22 10:43:15 crc kubenswrapper[4975]: I0122 10:43:15.908595 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" path="/var/lib/kubelet/pods/48c3f95e-78eb-4e10-bd32-4c4644190f01/volumes" Jan 22 10:43:15 crc kubenswrapper[4975]: I0122 10:43:15.909398 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" path="/var/lib/kubelet/pods/7ae0356f-f7ce-4d10-bef2-c322e09f9889/volumes" Jan 22 10:43:15 crc kubenswrapper[4975]: I0122 10:43:15.910179 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" path="/var/lib/kubelet/pods/9bb39999-a9af-4a3b-bc78-427d76a9fbd9/volumes" Jan 22 10:43:15 crc kubenswrapper[4975]: I0122 10:43:15.915175 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pkfxd" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028289 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vhcx5"] Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028521 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerName="extract-utilities" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028534 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerName="extract-utilities" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028547 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerName="marketplace-operator" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028553 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerName="marketplace-operator" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028561 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerName="marketplace-operator" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028569 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerName="marketplace-operator" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028577 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerName="extract-content" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028583 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerName="extract-content" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028590 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerName="extract-content" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028596 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerName="extract-content" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 
10:43:16.028607 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerName="extract-utilities" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028613 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerName="extract-utilities" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028621 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028626 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028633 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="extract-content" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028639 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="extract-content" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028647 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028653 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028660 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028666 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028677 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="extract-utilities" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028683 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="extract-utilities" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028691 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028696 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028704 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerName="extract-content" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028709 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerName="extract-content" Jan 22 10:43:16 crc kubenswrapper[4975]: E0122 10:43:16.028717 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerName="extract-utilities" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028723 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerName="extract-utilities" Jan 22 10:43:16 
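
Each error-level cpu_manager.go:410 line above is paired with a state_mem.go:107 deletion: admitting the new redhat-operators-vhcx5 pod triggers RemoveStaleState, which drops checkpointed per-container assignments belonging to pods that no longer exist (the memory_manager.go:354 entries that follow do the same for memory state). A toy sketch of such a sweep, with an assumed (podUID, containerName) key shape rather than the managers' real checkpoint format:

```go
package main

import "fmt"

// key mimics how per-container resource state is addressed in the
// entries above: one record per (podUID, containerName).
type key struct{ podUID, container string }

// removeStale drops state for any pod that is no longer live.
func removeStale(state map[key]string, livePods map[string]bool) {
	for k := range state {
		if !livePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing podUID=%q container=%q\n",
				k.podUID, k.container)
			delete(state, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	state := map[key]string{
		{"27a32b6c", "extract-utilities"}: "cpuset=1",
		{"f3c75e99", "registry-server"}:   "cpuset=2",
	}
	// Only f3c75e99 still exists; the 27a32b6c entry is swept.
	removeStale(state, map[string]bool{"f3c75e99": true})
	fmt.Println(state)
}
```
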
crc kubenswrapper[4975]: I0122 10:43:16.028817 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb39999-a9af-4a3b-bc78-427d76a9fbd9" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028829 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae0356f-f7ce-4d10-bef2-c322e09f9889" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028836 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerName="marketplace-operator" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028844 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a32b6c-0030-4c5a-bdbe-61722e22e813" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028850 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef0df15-7a94-4740-9601-0a29b068f5e7" containerName="registry-server" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.028996 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c3f95e-78eb-4e10-bd32-4c4644190f01" containerName="marketplace-operator" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.029505 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.031169 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.038947 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhcx5"] Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.090994 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c75e99-22ed-439b-a7a3-e3d4163d7809-catalog-content\") pod \"redhat-operators-vhcx5\" (UID: \"f3c75e99-22ed-439b-a7a3-e3d4163d7809\") " pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.091047 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx6x4\" (UniqueName: \"kubernetes.io/projected/f3c75e99-22ed-439b-a7a3-e3d4163d7809-kube-api-access-vx6x4\") pod \"redhat-operators-vhcx5\" (UID: \"f3c75e99-22ed-439b-a7a3-e3d4163d7809\") " pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.091092 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c75e99-22ed-439b-a7a3-e3d4163d7809-utilities\") pod \"redhat-operators-vhcx5\" (UID: \"f3c75e99-22ed-439b-a7a3-e3d4163d7809\") " pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.192295 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c75e99-22ed-439b-a7a3-e3d4163d7809-utilities\") pod \"redhat-operators-vhcx5\" (UID: \"f3c75e99-22ed-439b-a7a3-e3d4163d7809\") " pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.192392 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f3c75e99-22ed-439b-a7a3-e3d4163d7809-catalog-content\") pod \"redhat-operators-vhcx5\" (UID: \"f3c75e99-22ed-439b-a7a3-e3d4163d7809\") " pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.192428 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx6x4\" (UniqueName: \"kubernetes.io/projected/f3c75e99-22ed-439b-a7a3-e3d4163d7809-kube-api-access-vx6x4\") pod \"redhat-operators-vhcx5\" (UID: \"f3c75e99-22ed-439b-a7a3-e3d4163d7809\") " pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.192772 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c75e99-22ed-439b-a7a3-e3d4163d7809-utilities\") pod \"redhat-operators-vhcx5\" (UID: \"f3c75e99-22ed-439b-a7a3-e3d4163d7809\") " pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.192854 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c75e99-22ed-439b-a7a3-e3d4163d7809-catalog-content\") pod \"redhat-operators-vhcx5\" (UID: \"f3c75e99-22ed-439b-a7a3-e3d4163d7809\") " pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.216250 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx6x4\" (UniqueName: \"kubernetes.io/projected/f3c75e99-22ed-439b-a7a3-e3d4163d7809-kube-api-access-vx6x4\") pod \"redhat-operators-vhcx5\" (UID: \"f3c75e99-22ed-439b-a7a3-e3d4163d7809\") " pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.358559 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.548592 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhcx5"] Jan 22 10:43:16 crc kubenswrapper[4975]: W0122 10:43:16.557535 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3c75e99_22ed_439b_a7a3_e3d4163d7809.slice/crio-c6fc32a1e18ac296722aff6014c449d2c2aabf7e40c8f4a451068847edbf566d WatchSource:0}: Error finding container c6fc32a1e18ac296722aff6014c449d2c2aabf7e40c8f4a451068847edbf566d: Status 404 returned error can't find the container with id c6fc32a1e18ac296722aff6014c449d2c2aabf7e40c8f4a451068847edbf566d Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.918407 4975 generic.go:334] "Generic (PLEG): container finished" podID="f3c75e99-22ed-439b-a7a3-e3d4163d7809" containerID="aa0e00c567c1857ca57a1517be3ec7597ce832ed4c5526bf3d1c9928d23cbbf5" exitCode=0 Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.918504 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhcx5" event={"ID":"f3c75e99-22ed-439b-a7a3-e3d4163d7809","Type":"ContainerDied","Data":"aa0e00c567c1857ca57a1517be3ec7597ce832ed4c5526bf3d1c9928d23cbbf5"} Jan 22 10:43:16 crc kubenswrapper[4975]: I0122 10:43:16.918778 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhcx5" event={"ID":"f3c75e99-22ed-439b-a7a3-e3d4163d7809","Type":"ContainerStarted","Data":"c6fc32a1e18ac296722aff6014c449d2c2aabf7e40c8f4a451068847edbf566d"} Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.033210 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfznn"] Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.034298 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.037105 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.042327 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfznn"] Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.104769 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ce1bb8-c814-40c5-a416-a5f6f5e30d2d-utilities\") pod \"certified-operators-sfznn\" (UID: \"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d\") " pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.104866 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ce1bb8-c814-40c5-a416-a5f6f5e30d2d-catalog-content\") pod \"certified-operators-sfznn\" (UID: \"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d\") " pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.104984 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rf86\" (UniqueName: \"kubernetes.io/projected/97ce1bb8-c814-40c5-a416-a5f6f5e30d2d-kube-api-access-8rf86\") pod \"certified-operators-sfznn\" (UID: \"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d\") " pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.206203 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ce1bb8-c814-40c5-a416-a5f6f5e30d2d-utilities\") pod \"certified-operators-sfznn\" (UID: \"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d\") " pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.206441 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ce1bb8-c814-40c5-a416-a5f6f5e30d2d-catalog-content\") pod \"certified-operators-sfznn\" (UID: \"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d\") " pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.206486 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rf86\" (UniqueName: \"kubernetes.io/projected/97ce1bb8-c814-40c5-a416-a5f6f5e30d2d-kube-api-access-8rf86\") pod \"certified-operators-sfznn\" (UID: \"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d\") " pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.206825 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ce1bb8-c814-40c5-a416-a5f6f5e30d2d-utilities\") pod \"certified-operators-sfznn\" (UID: \"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d\") " pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.206958 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ce1bb8-c814-40c5-a416-a5f6f5e30d2d-catalog-content\") pod \"certified-operators-sfznn\" (UID: 
\"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d\") " pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.225744 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rf86\" (UniqueName: \"kubernetes.io/projected/97ce1bb8-c814-40c5-a416-a5f6f5e30d2d-kube-api-access-8rf86\") pod \"certified-operators-sfznn\" (UID: \"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d\") " pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.357286 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.770468 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfznn"] Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.924838 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfznn" event={"ID":"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d","Type":"ContainerStarted","Data":"8aefd4ada465edc9109792e73ef55e870bad7cd3409840d0aa111c9b6c2f1698"} Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.924882 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfznn" event={"ID":"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d","Type":"ContainerStarted","Data":"048e3e0e4346b8d8562d272082adea97f6258ebdd5e2242b7da18e3d5dbe75e4"} Jan 22 10:43:17 crc kubenswrapper[4975]: I0122 10:43:17.932904 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhcx5" event={"ID":"f3c75e99-22ed-439b-a7a3-e3d4163d7809","Type":"ContainerStarted","Data":"d49c04de07e938ed74fd366bb89f94862528473250e9bbc0356d77e29c75b42a"} Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.432109 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9fpv2"] Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.433053 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.435427 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.441730 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9fpv2"] Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.623971 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e426aac-ee27-4d77-b5ee-a16e9c105097-utilities\") pod \"community-operators-9fpv2\" (UID: \"1e426aac-ee27-4d77-b5ee-a16e9c105097\") " pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.624048 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e426aac-ee27-4d77-b5ee-a16e9c105097-catalog-content\") pod \"community-operators-9fpv2\" (UID: \"1e426aac-ee27-4d77-b5ee-a16e9c105097\") " pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.624083 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6zr9\" (UniqueName: \"kubernetes.io/projected/1e426aac-ee27-4d77-b5ee-a16e9c105097-kube-api-access-d6zr9\") pod \"community-operators-9fpv2\" (UID: \"1e426aac-ee27-4d77-b5ee-a16e9c105097\") " pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.725588 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e426aac-ee27-4d77-b5ee-a16e9c105097-utilities\") pod \"community-operators-9fpv2\" (UID: \"1e426aac-ee27-4d77-b5ee-a16e9c105097\") " pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.725680 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e426aac-ee27-4d77-b5ee-a16e9c105097-catalog-content\") pod \"community-operators-9fpv2\" (UID: \"1e426aac-ee27-4d77-b5ee-a16e9c105097\") " pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.725720 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6zr9\" (UniqueName: \"kubernetes.io/projected/1e426aac-ee27-4d77-b5ee-a16e9c105097-kube-api-access-d6zr9\") pod \"community-operators-9fpv2\" (UID: \"1e426aac-ee27-4d77-b5ee-a16e9c105097\") " pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.726208 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e426aac-ee27-4d77-b5ee-a16e9c105097-utilities\") pod \"community-operators-9fpv2\" (UID: \"1e426aac-ee27-4d77-b5ee-a16e9c105097\") " pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.726417 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e426aac-ee27-4d77-b5ee-a16e9c105097-catalog-content\") pod \"community-operators-9fpv2\" (UID: 
\"1e426aac-ee27-4d77-b5ee-a16e9c105097\") " pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.742515 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6zr9\" (UniqueName: \"kubernetes.io/projected/1e426aac-ee27-4d77-b5ee-a16e9c105097-kube-api-access-d6zr9\") pod \"community-operators-9fpv2\" (UID: \"1e426aac-ee27-4d77-b5ee-a16e9c105097\") " pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.755635 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.945309 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9fpv2"] Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.947706 4975 generic.go:334] "Generic (PLEG): container finished" podID="97ce1bb8-c814-40c5-a416-a5f6f5e30d2d" containerID="8aefd4ada465edc9109792e73ef55e870bad7cd3409840d0aa111c9b6c2f1698" exitCode=0 Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.947786 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfznn" event={"ID":"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d","Type":"ContainerDied","Data":"8aefd4ada465edc9109792e73ef55e870bad7cd3409840d0aa111c9b6c2f1698"} Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.951098 4975 generic.go:334] "Generic (PLEG): container finished" podID="f3c75e99-22ed-439b-a7a3-e3d4163d7809" containerID="d49c04de07e938ed74fd366bb89f94862528473250e9bbc0356d77e29c75b42a" exitCode=0 Jan 22 10:43:18 crc kubenswrapper[4975]: I0122 10:43:18.951133 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhcx5" event={"ID":"f3c75e99-22ed-439b-a7a3-e3d4163d7809","Type":"ContainerDied","Data":"d49c04de07e938ed74fd366bb89f94862528473250e9bbc0356d77e29c75b42a"} Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.435467 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lkkwb"] Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.436756 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.439653 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.446467 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkkwb"] Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.535760 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2fg9\" (UniqueName: \"kubernetes.io/projected/59934cb1-4316-4b56-aaa6-bd2eb9eaa394-kube-api-access-j2fg9\") pod \"redhat-marketplace-lkkwb\" (UID: \"59934cb1-4316-4b56-aaa6-bd2eb9eaa394\") " pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.536034 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59934cb1-4316-4b56-aaa6-bd2eb9eaa394-utilities\") pod \"redhat-marketplace-lkkwb\" (UID: \"59934cb1-4316-4b56-aaa6-bd2eb9eaa394\") " pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.536117 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59934cb1-4316-4b56-aaa6-bd2eb9eaa394-catalog-content\") pod \"redhat-marketplace-lkkwb\" (UID: \"59934cb1-4316-4b56-aaa6-bd2eb9eaa394\") " pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.636745 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2fg9\" (UniqueName: \"kubernetes.io/projected/59934cb1-4316-4b56-aaa6-bd2eb9eaa394-kube-api-access-j2fg9\") pod \"redhat-marketplace-lkkwb\" (UID: \"59934cb1-4316-4b56-aaa6-bd2eb9eaa394\") " pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.636795 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59934cb1-4316-4b56-aaa6-bd2eb9eaa394-utilities\") pod \"redhat-marketplace-lkkwb\" (UID: \"59934cb1-4316-4b56-aaa6-bd2eb9eaa394\") " pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.636884 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59934cb1-4316-4b56-aaa6-bd2eb9eaa394-catalog-content\") pod \"redhat-marketplace-lkkwb\" (UID: \"59934cb1-4316-4b56-aaa6-bd2eb9eaa394\") " pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.637436 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59934cb1-4316-4b56-aaa6-bd2eb9eaa394-utilities\") pod \"redhat-marketplace-lkkwb\" (UID: \"59934cb1-4316-4b56-aaa6-bd2eb9eaa394\") " pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.637509 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59934cb1-4316-4b56-aaa6-bd2eb9eaa394-catalog-content\") pod \"redhat-marketplace-lkkwb\" (UID: 
\"59934cb1-4316-4b56-aaa6-bd2eb9eaa394\") " pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.661998 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2fg9\" (UniqueName: \"kubernetes.io/projected/59934cb1-4316-4b56-aaa6-bd2eb9eaa394-kube-api-access-j2fg9\") pod \"redhat-marketplace-lkkwb\" (UID: \"59934cb1-4316-4b56-aaa6-bd2eb9eaa394\") " pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.780891 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.958179 4975 generic.go:334] "Generic (PLEG): container finished" podID="97ce1bb8-c814-40c5-a416-a5f6f5e30d2d" containerID="3692eef574c8964eab1d62d6ce08cf508b891a811349fb8c8dab3e5e38b1526d" exitCode=0 Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.958387 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfznn" event={"ID":"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d","Type":"ContainerDied","Data":"3692eef574c8964eab1d62d6ce08cf508b891a811349fb8c8dab3e5e38b1526d"} Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.964368 4975 generic.go:334] "Generic (PLEG): container finished" podID="1e426aac-ee27-4d77-b5ee-a16e9c105097" containerID="58bd7265fd170410d9dc98b2fc68278cb0c8ce6ad0d73c0b91c91082f5b6324b" exitCode=0 Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.964463 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fpv2" event={"ID":"1e426aac-ee27-4d77-b5ee-a16e9c105097","Type":"ContainerDied","Data":"58bd7265fd170410d9dc98b2fc68278cb0c8ce6ad0d73c0b91c91082f5b6324b"} Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.964495 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fpv2" event={"ID":"1e426aac-ee27-4d77-b5ee-a16e9c105097","Type":"ContainerStarted","Data":"8737c20c76be33f833d67d77a10e708639b0deebd5f028278d7b91ae0a05cbdf"} Jan 22 10:43:19 crc kubenswrapper[4975]: I0122 10:43:19.983027 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhcx5" event={"ID":"f3c75e99-22ed-439b-a7a3-e3d4163d7809","Type":"ContainerStarted","Data":"be4b87064961c1b0ac209b0ac62f842efe6321a780bb70c870b87edc3f024a97"} Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.010563 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vhcx5" podStartSLOduration=1.573015416 podStartE2EDuration="4.010543213s" podCreationTimestamp="2026-01-22 10:43:16 +0000 UTC" firstStartedPulling="2026-01-22 10:43:16.920133502 +0000 UTC m=+389.578169622" lastFinishedPulling="2026-01-22 10:43:19.357661309 +0000 UTC m=+392.015697419" observedRunningTime="2026-01-22 10:43:20.008828584 +0000 UTC m=+392.666864694" watchObservedRunningTime="2026-01-22 10:43:20.010543213 +0000 UTC m=+392.668579313" Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.227250 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkkwb"] Jan 22 10:43:20 crc kubenswrapper[4975]: W0122 10:43:20.247480 4975 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59934cb1_4316_4b56_aaa6_bd2eb9eaa394.slice/crio-9adafc581a5fb446c88afabd455f2bae37f91d85717004eaa478d6f7d0cc4c6c WatchSource:0}: Error finding container 9adafc581a5fb446c88afabd455f2bae37f91d85717004eaa478d6f7d0cc4c6c: Status 404 returned error can't find the container with id 9adafc581a5fb446c88afabd455f2bae37f91d85717004eaa478d6f7d0cc4c6c Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.528161 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"] Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.528621 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb" podUID="c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" containerName="controller-manager" containerID="cri-o://ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468" gracePeriod=30 Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.902471 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb" Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.988804 4975 generic.go:334] "Generic (PLEG): container finished" podID="59934cb1-4316-4b56-aaa6-bd2eb9eaa394" containerID="ab5061499a2ac4a57da4f9c9078a80744332836a41e7605ee8ab8ad808ae69af" exitCode=0 Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.988878 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkkwb" event={"ID":"59934cb1-4316-4b56-aaa6-bd2eb9eaa394","Type":"ContainerDied","Data":"ab5061499a2ac4a57da4f9c9078a80744332836a41e7605ee8ab8ad808ae69af"} Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.988905 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkkwb" event={"ID":"59934cb1-4316-4b56-aaa6-bd2eb9eaa394","Type":"ContainerStarted","Data":"9adafc581a5fb446c88afabd455f2bae37f91d85717004eaa478d6f7d0cc4c6c"} Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.991472 4975 generic.go:334] "Generic (PLEG): container finished" podID="c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" containerID="ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468" exitCode=0 Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.991510 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb" event={"ID":"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0","Type":"ContainerDied","Data":"ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468"} Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.991526 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb" event={"ID":"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0","Type":"ContainerDied","Data":"ab90794a6236929fdd1ee2e29c82c93ba0142dc04c82fdd4c063cc520e4f49d4"} Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.991542 4975 scope.go:117] "RemoveContainer" containerID="ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468" Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.991617 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb" Jan 22 10:43:20 crc kubenswrapper[4975]: I0122 10:43:20.994933 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfznn" event={"ID":"97ce1bb8-c814-40c5-a416-a5f6f5e30d2d","Type":"ContainerStarted","Data":"f1059e01616101e04bddea49130854e6ec861c32be48603b49c12c1a808df1ba"} Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.007557 4975 generic.go:334] "Generic (PLEG): container finished" podID="1e426aac-ee27-4d77-b5ee-a16e9c105097" containerID="fa4816a9d1dd441f5eb7ec521e27a59fb8684a15634fde3b8167492ea8c08bdf" exitCode=0 Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.007782 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fpv2" event={"ID":"1e426aac-ee27-4d77-b5ee-a16e9c105097","Type":"ContainerDied","Data":"fa4816a9d1dd441f5eb7ec521e27a59fb8684a15634fde3b8167492ea8c08bdf"} Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.026889 4975 scope.go:117] "RemoveContainer" containerID="ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.029585 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfznn" podStartSLOduration=2.624126563 podStartE2EDuration="4.029573076s" podCreationTimestamp="2026-01-22 10:43:17 +0000 UTC" firstStartedPulling="2026-01-22 10:43:18.949172604 +0000 UTC m=+391.607208714" lastFinishedPulling="2026-01-22 10:43:20.354619117 +0000 UTC m=+393.012655227" observedRunningTime="2026-01-22 10:43:21.020552766 +0000 UTC m=+393.678588866" watchObservedRunningTime="2026-01-22 10:43:21.029573076 +0000 UTC m=+393.687609186" Jan 22 10:43:21 crc kubenswrapper[4975]: E0122 10:43:21.030480 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468\": container with ID starting with ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468 not found: ID does not exist" containerID="ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.030529 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468"} err="failed to get container status \"ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468\": rpc error: code = NotFound desc = could not find container \"ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468\": container with ID starting with ce7bce6f0fdc7022392993ed9e12b569b054555bf36d8200f8e2efa7f5925468 not found: ID does not exist" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.066526 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-proxy-ca-bundles\") pod \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.067092 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-serving-cert\") pod \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\" (UID: 
\"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.067119 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-client-ca\") pod \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.067150 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4njhs\" (UniqueName: \"kubernetes.io/projected/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-kube-api-access-4njhs\") pod \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.067170 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-config\") pod \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\" (UID: \"c02cca59-a13b-4ce5-8239-ec0cf9eae6e0\") " Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.067382 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" (UID: "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.067688 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" (UID: "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.067819 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-config" (OuterVolumeSpecName: "config") pod "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" (UID: "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.071940 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-kube-api-access-4njhs" (OuterVolumeSpecName: "kube-api-access-4njhs") pod "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" (UID: "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0"). InnerVolumeSpecName "kube-api-access-4njhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.072674 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" (UID: "c02cca59-a13b-4ce5-8239-ec0cf9eae6e0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.169697 4975 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.169731 4975 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.169746 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4njhs\" (UniqueName: \"kubernetes.io/projected/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-kube-api-access-4njhs\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.169758 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.169769 4975 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.324342 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"] Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.330168 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-bjjbb"] Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.762853 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" path="/var/lib/kubelet/pods/c02cca59-a13b-4ce5-8239-ec0cf9eae6e0/volumes" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.904198 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr"] Jan 22 10:43:21 crc kubenswrapper[4975]: E0122 10:43:21.904463 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" containerName="controller-manager" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.904474 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" containerName="controller-manager" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.904564 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02cca59-a13b-4ce5-8239-ec0cf9eae6e0" containerName="controller-manager" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.905225 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.908159 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.908577 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.908725 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.908905 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.908930 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.909210 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.916780 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.919636 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr"] Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.979385 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-config\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.979447 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-serving-cert\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.979475 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmsq\" (UniqueName: \"kubernetes.io/projected/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-kube-api-access-qcmsq\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.979527 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-proxy-ca-bundles\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:21 crc kubenswrapper[4975]: I0122 10:43:21.979556 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-client-ca\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.014098 4975 generic.go:334] "Generic (PLEG): container finished" podID="59934cb1-4316-4b56-aaa6-bd2eb9eaa394" containerID="236e56e89f6765ccb31a72a076e376d87f58e11a17df4eb7aa70932d08ea0105" exitCode=0 Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.014217 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkkwb" event={"ID":"59934cb1-4316-4b56-aaa6-bd2eb9eaa394","Type":"ContainerDied","Data":"236e56e89f6765ccb31a72a076e376d87f58e11a17df4eb7aa70932d08ea0105"} Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.017596 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fpv2" event={"ID":"1e426aac-ee27-4d77-b5ee-a16e9c105097","Type":"ContainerStarted","Data":"9f037e92c867d7ad6d1659b015998c1e083a7140533cf7720e8e9a35a173a99c"} Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.044433 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9fpv2" podStartSLOduration=2.617492111 podStartE2EDuration="4.044416331s" podCreationTimestamp="2026-01-22 10:43:18 +0000 UTC" firstStartedPulling="2026-01-22 10:43:19.965795653 +0000 UTC m=+392.623831783" lastFinishedPulling="2026-01-22 10:43:21.392719893 +0000 UTC m=+394.050756003" observedRunningTime="2026-01-22 10:43:22.040465129 +0000 UTC m=+394.698501239" watchObservedRunningTime="2026-01-22 10:43:22.044416331 +0000 UTC m=+394.702452441" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.080400 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-config\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.080639 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-serving-cert\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.080696 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmsq\" (UniqueName: \"kubernetes.io/projected/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-kube-api-access-qcmsq\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.080734 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-proxy-ca-bundles\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.080785 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-client-ca\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.081825 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-config\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.082240 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-client-ca\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.082395 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-proxy-ca-bundles\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.089027 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-serving-cert\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.099160 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmsq\" (UniqueName: \"kubernetes.io/projected/3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3-kube-api-access-qcmsq\") pod \"controller-manager-6c5dcfc7df-ktvvr\" (UID: \"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3\") " pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.224623 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:22 crc kubenswrapper[4975]: I0122 10:43:22.687932 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr"] Jan 22 10:43:22 crc kubenswrapper[4975]: W0122 10:43:22.696313 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ebfcfa0_997a_4436_8ddf_2fd82db4d8d3.slice/crio-73a3dc924952c277c4d6b3d52b336e19e16768898790b58e662f1ffa07c08ffe WatchSource:0}: Error finding container 73a3dc924952c277c4d6b3d52b336e19e16768898790b58e662f1ffa07c08ffe: Status 404 returned error can't find the container with id 73a3dc924952c277c4d6b3d52b336e19e16768898790b58e662f1ffa07c08ffe Jan 22 10:43:23 crc kubenswrapper[4975]: I0122 10:43:23.022491 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" event={"ID":"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3","Type":"ContainerStarted","Data":"095933921b4841890e5c9e55c36db278feb543e6cba5839e3c92ed35019c4c94"} Jan 22 10:43:23 crc kubenswrapper[4975]: I0122 10:43:23.023603 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" event={"ID":"3ebfcfa0-997a-4436-8ddf-2fd82db4d8d3","Type":"ContainerStarted","Data":"73a3dc924952c277c4d6b3d52b336e19e16768898790b58e662f1ffa07c08ffe"} Jan 22 10:43:23 crc kubenswrapper[4975]: I0122 10:43:23.023714 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:23 crc kubenswrapper[4975]: I0122 10:43:23.024552 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkkwb" event={"ID":"59934cb1-4316-4b56-aaa6-bd2eb9eaa394","Type":"ContainerStarted","Data":"83644590d4a3c9da5325c852acfe17c8ce9ef5c871df69ffa0ecfa20fe739648"} Jan 22 10:43:23 crc kubenswrapper[4975]: I0122 10:43:23.032097 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" Jan 22 10:43:23 crc kubenswrapper[4975]: I0122 10:43:23.045888 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c5dcfc7df-ktvvr" podStartSLOduration=3.045870075 podStartE2EDuration="3.045870075s" podCreationTimestamp="2026-01-22 10:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:43:23.043990171 +0000 UTC m=+395.702026291" watchObservedRunningTime="2026-01-22 10:43:23.045870075 +0000 UTC m=+395.703906185" Jan 22 10:43:23 crc kubenswrapper[4975]: I0122 10:43:23.089122 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lkkwb" podStartSLOduration=2.597904645 podStartE2EDuration="4.08909953s" podCreationTimestamp="2026-01-22 10:43:19 +0000 UTC" firstStartedPulling="2026-01-22 10:43:20.990410395 +0000 UTC m=+393.648446505" lastFinishedPulling="2026-01-22 10:43:22.48160528 +0000 UTC m=+395.139641390" observedRunningTime="2026-01-22 10:43:23.064238462 +0000 UTC m=+395.722274582" watchObservedRunningTime="2026-01-22 10:43:23.08909953 +0000 UTC m=+395.747135650" Jan 22 10:43:26 crc kubenswrapper[4975]: I0122 10:43:26.358934 4975 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:26 crc kubenswrapper[4975]: I0122 10:43:26.359426 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:26 crc kubenswrapper[4975]: I0122 10:43:26.434470 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:27 crc kubenswrapper[4975]: I0122 10:43:27.085356 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vhcx5" Jan 22 10:43:27 crc kubenswrapper[4975]: I0122 10:43:27.358285 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:27 crc kubenswrapper[4975]: I0122 10:43:27.358412 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:27 crc kubenswrapper[4975]: I0122 10:43:27.423589 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:27 crc kubenswrapper[4975]: I0122 10:43:27.437070 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:43:27 crc kubenswrapper[4975]: I0122 10:43:27.437163 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:43:28 crc kubenswrapper[4975]: I0122 10:43:28.113818 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfznn" Jan 22 10:43:28 crc kubenswrapper[4975]: I0122 10:43:28.755981 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:28 crc kubenswrapper[4975]: I0122 10:43:28.756270 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:28 crc kubenswrapper[4975]: I0122 10:43:28.822275 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:29 crc kubenswrapper[4975]: I0122 10:43:29.113124 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9fpv2" Jan 22 10:43:29 crc kubenswrapper[4975]: I0122 10:43:29.781497 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:29 crc kubenswrapper[4975]: I0122 10:43:29.781549 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:29 crc kubenswrapper[4975]: I0122 10:43:29.838252 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:30 crc kubenswrapper[4975]: I0122 
10:43:30.099910 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lkkwb" Jan 22 10:43:57 crc kubenswrapper[4975]: I0122 10:43:57.436490 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:43:57 crc kubenswrapper[4975]: I0122 10:43:57.437109 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:43:57 crc kubenswrapper[4975]: I0122 10:43:57.437185 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:43:57 crc kubenswrapper[4975]: I0122 10:43:57.438096 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eda945b61af510720457294a7eb7909c6ec1747b6bf8c3923093ea32bb4739ef"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:43:57 crc kubenswrapper[4975]: I0122 10:43:57.438193 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://eda945b61af510720457294a7eb7909c6ec1747b6bf8c3923093ea32bb4739ef" gracePeriod=600 Jan 22 10:43:58 crc kubenswrapper[4975]: I0122 10:43:58.261325 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="eda945b61af510720457294a7eb7909c6ec1747b6bf8c3923093ea32bb4739ef" exitCode=0 Jan 22 10:43:58 crc kubenswrapper[4975]: I0122 10:43:58.261439 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"eda945b61af510720457294a7eb7909c6ec1747b6bf8c3923093ea32bb4739ef"} Jan 22 10:43:58 crc kubenswrapper[4975]: I0122 10:43:58.261992 4975 scope.go:117] "RemoveContainer" containerID="954234f20f3e7725ea321b3edbe9946c2702d8a2a71885d0aa7a9ed8c50ac4e2" Jan 22 10:43:58 crc kubenswrapper[4975]: I0122 10:43:58.262849 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"acda3fcae4097cbb557b51de8e95309058b3079ea14639b6ff4a35c52220bbdc"} Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.200980 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk"] Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.202814 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.205922 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.207082 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.207962 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk"] Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.394477 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87614eb1-74cf-41fe-8076-ee6ddad974d4-config-volume\") pod \"collect-profiles-29484645-qsjxk\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.394763 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87614eb1-74cf-41fe-8076-ee6ddad974d4-secret-volume\") pod \"collect-profiles-29484645-qsjxk\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.394811 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crjh\" (UniqueName: \"kubernetes.io/projected/87614eb1-74cf-41fe-8076-ee6ddad974d4-kube-api-access-9crjh\") pod \"collect-profiles-29484645-qsjxk\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.495712 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87614eb1-74cf-41fe-8076-ee6ddad974d4-config-volume\") pod \"collect-profiles-29484645-qsjxk\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.495767 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87614eb1-74cf-41fe-8076-ee6ddad974d4-secret-volume\") pod \"collect-profiles-29484645-qsjxk\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.495825 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crjh\" (UniqueName: \"kubernetes.io/projected/87614eb1-74cf-41fe-8076-ee6ddad974d4-kube-api-access-9crjh\") pod \"collect-profiles-29484645-qsjxk\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.497269 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87614eb1-74cf-41fe-8076-ee6ddad974d4-config-volume\") pod 
\"collect-profiles-29484645-qsjxk\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.503211 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87614eb1-74cf-41fe-8076-ee6ddad974d4-secret-volume\") pod \"collect-profiles-29484645-qsjxk\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.518928 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crjh\" (UniqueName: \"kubernetes.io/projected/87614eb1-74cf-41fe-8076-ee6ddad974d4-kube-api-access-9crjh\") pod \"collect-profiles-29484645-qsjxk\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:00 crc kubenswrapper[4975]: I0122 10:45:00.529789 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:01 crc kubenswrapper[4975]: I0122 10:45:01.013114 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk"] Jan 22 10:45:01 crc kubenswrapper[4975]: I0122 10:45:01.717026 4975 generic.go:334] "Generic (PLEG): container finished" podID="87614eb1-74cf-41fe-8076-ee6ddad974d4" containerID="412f122b9f563dc086ffebd6c3ea927ee5effe994d8c7aed5e4f18ead5fb17cc" exitCode=0 Jan 22 10:45:01 crc kubenswrapper[4975]: I0122 10:45:01.717093 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" event={"ID":"87614eb1-74cf-41fe-8076-ee6ddad974d4","Type":"ContainerDied","Data":"412f122b9f563dc086ffebd6c3ea927ee5effe994d8c7aed5e4f18ead5fb17cc"} Jan 22 10:45:01 crc kubenswrapper[4975]: I0122 10:45:01.717130 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" event={"ID":"87614eb1-74cf-41fe-8076-ee6ddad974d4","Type":"ContainerStarted","Data":"68d6c6cdf89d7c63ceab31decc3d14adf8a37c9f6653a6727a559f560a564b0a"} Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.054590 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.231477 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crjh\" (UniqueName: \"kubernetes.io/projected/87614eb1-74cf-41fe-8076-ee6ddad974d4-kube-api-access-9crjh\") pod \"87614eb1-74cf-41fe-8076-ee6ddad974d4\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.232087 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87614eb1-74cf-41fe-8076-ee6ddad974d4-secret-volume\") pod \"87614eb1-74cf-41fe-8076-ee6ddad974d4\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.232221 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87614eb1-74cf-41fe-8076-ee6ddad974d4-config-volume\") pod \"87614eb1-74cf-41fe-8076-ee6ddad974d4\" (UID: \"87614eb1-74cf-41fe-8076-ee6ddad974d4\") " Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.233198 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87614eb1-74cf-41fe-8076-ee6ddad974d4-config-volume" (OuterVolumeSpecName: "config-volume") pod "87614eb1-74cf-41fe-8076-ee6ddad974d4" (UID: "87614eb1-74cf-41fe-8076-ee6ddad974d4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.240933 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87614eb1-74cf-41fe-8076-ee6ddad974d4-kube-api-access-9crjh" (OuterVolumeSpecName: "kube-api-access-9crjh") pod "87614eb1-74cf-41fe-8076-ee6ddad974d4" (UID: "87614eb1-74cf-41fe-8076-ee6ddad974d4"). InnerVolumeSpecName "kube-api-access-9crjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.244725 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87614eb1-74cf-41fe-8076-ee6ddad974d4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87614eb1-74cf-41fe-8076-ee6ddad974d4" (UID: "87614eb1-74cf-41fe-8076-ee6ddad974d4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.334045 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crjh\" (UniqueName: \"kubernetes.io/projected/87614eb1-74cf-41fe-8076-ee6ddad974d4-kube-api-access-9crjh\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.334093 4975 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87614eb1-74cf-41fe-8076-ee6ddad974d4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.334112 4975 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87614eb1-74cf-41fe-8076-ee6ddad974d4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.734666 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" event={"ID":"87614eb1-74cf-41fe-8076-ee6ddad974d4","Type":"ContainerDied","Data":"68d6c6cdf89d7c63ceab31decc3d14adf8a37c9f6653a6727a559f560a564b0a"} Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.735028 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68d6c6cdf89d7c63ceab31decc3d14adf8a37c9f6653a6727a559f560a564b0a" Jan 22 10:45:03 crc kubenswrapper[4975]: I0122 10:45:03.734831 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk" Jan 22 10:45:48 crc kubenswrapper[4975]: I0122 10:45:48.019860 4975 scope.go:117] "RemoveContainer" containerID="a3b160155690ed814c14af8a7bf354dabb1e670090a3ec3f76c86419c439f5dd" Jan 22 10:45:57 crc kubenswrapper[4975]: I0122 10:45:57.436454 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:45:57 crc kubenswrapper[4975]: I0122 10:45:57.436920 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:46:27 crc kubenswrapper[4975]: I0122 10:46:27.436756 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:46:27 crc kubenswrapper[4975]: I0122 10:46:27.437410 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:46:57 crc kubenswrapper[4975]: I0122 10:46:57.437104 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:46:57 crc kubenswrapper[4975]: I0122 10:46:57.437748 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:46:57 crc kubenswrapper[4975]: I0122 10:46:57.437815 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:46:57 crc kubenswrapper[4975]: I0122 10:46:57.438634 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"acda3fcae4097cbb557b51de8e95309058b3079ea14639b6ff4a35c52220bbdc"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:46:57 crc kubenswrapper[4975]: I0122 10:46:57.438733 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://acda3fcae4097cbb557b51de8e95309058b3079ea14639b6ff4a35c52220bbdc" gracePeriod=600 Jan 22 10:46:57 crc kubenswrapper[4975]: I0122 10:46:57.580837 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="acda3fcae4097cbb557b51de8e95309058b3079ea14639b6ff4a35c52220bbdc" exitCode=0 Jan 22 10:46:57 crc kubenswrapper[4975]: I0122 10:46:57.580882 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"acda3fcae4097cbb557b51de8e95309058b3079ea14639b6ff4a35c52220bbdc"} Jan 22 10:46:57 crc kubenswrapper[4975]: I0122 10:46:57.581282 4975 scope.go:117] "RemoveContainer" containerID="eda945b61af510720457294a7eb7909c6ec1747b6bf8c3923093ea32bb4739ef" Jan 22 10:46:58 crc kubenswrapper[4975]: I0122 10:46:58.591137 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"a53aa8903fe7a0da9b7319f66f37bade3977b1a8441ec861195439bca40661fa"} Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.699415 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qr7kz"] Jan 22 10:47:51 crc kubenswrapper[4975]: E0122 10:47:51.700602 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87614eb1-74cf-41fe-8076-ee6ddad974d4" containerName="collect-profiles" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.700632 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="87614eb1-74cf-41fe-8076-ee6ddad974d4" containerName="collect-profiles" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.700848 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="87614eb1-74cf-41fe-8076-ee6ddad974d4" containerName="collect-profiles" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.701473 4975 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.726913 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qr7kz"] Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.859004 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/61216370-98d3-46e1-a8f2-bfaef2b5ec89-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.859492 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l55tr\" (UniqueName: \"kubernetes.io/projected/61216370-98d3-46e1-a8f2-bfaef2b5ec89-kube-api-access-l55tr\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.859537 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/61216370-98d3-46e1-a8f2-bfaef2b5ec89-registry-tls\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.859584 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/61216370-98d3-46e1-a8f2-bfaef2b5ec89-registry-certificates\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.859636 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61216370-98d3-46e1-a8f2-bfaef2b5ec89-trusted-ca\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.859669 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61216370-98d3-46e1-a8f2-bfaef2b5ec89-bound-sa-token\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.859707 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/61216370-98d3-46e1-a8f2-bfaef2b5ec89-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.859892 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.887195 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.961209 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/61216370-98d3-46e1-a8f2-bfaef2b5ec89-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.961361 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l55tr\" (UniqueName: \"kubernetes.io/projected/61216370-98d3-46e1-a8f2-bfaef2b5ec89-kube-api-access-l55tr\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.961400 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/61216370-98d3-46e1-a8f2-bfaef2b5ec89-registry-tls\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.961458 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/61216370-98d3-46e1-a8f2-bfaef2b5ec89-registry-certificates\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.961501 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61216370-98d3-46e1-a8f2-bfaef2b5ec89-bound-sa-token\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.961536 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61216370-98d3-46e1-a8f2-bfaef2b5ec89-trusted-ca\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.961569 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/61216370-98d3-46e1-a8f2-bfaef2b5ec89-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.962252 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/61216370-98d3-46e1-a8f2-bfaef2b5ec89-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.964127 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/61216370-98d3-46e1-a8f2-bfaef2b5ec89-registry-certificates\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.966284 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61216370-98d3-46e1-a8f2-bfaef2b5ec89-trusted-ca\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.970590 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/61216370-98d3-46e1-a8f2-bfaef2b5ec89-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.970689 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/61216370-98d3-46e1-a8f2-bfaef2b5ec89-registry-tls\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.989243 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61216370-98d3-46e1-a8f2-bfaef2b5ec89-bound-sa-token\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:51 crc kubenswrapper[4975]: I0122 10:47:51.991740 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l55tr\" (UniqueName: \"kubernetes.io/projected/61216370-98d3-46e1-a8f2-bfaef2b5ec89-kube-api-access-l55tr\") pod \"image-registry-66df7c8f76-qr7kz\" (UID: \"61216370-98d3-46e1-a8f2-bfaef2b5ec89\") " pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:52 crc kubenswrapper[4975]: I0122 10:47:52.022578 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:47:52 crc kubenswrapper[4975]: I0122 10:47:52.293547 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qr7kz"] Jan 22 10:47:52 crc kubenswrapper[4975]: I0122 10:47:52.983954 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" event={"ID":"61216370-98d3-46e1-a8f2-bfaef2b5ec89","Type":"ContainerStarted","Data":"42a80a0876d91414c3da38e44c4bf312736ed4c86307c7f8edbe7571c0b24f4b"} Jan 22 10:47:52 crc kubenswrapper[4975]: I0122 10:47:52.984005 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" event={"ID":"61216370-98d3-46e1-a8f2-bfaef2b5ec89","Type":"ContainerStarted","Data":"96a5a8b688c84c7df9f0f43d9a3e92b9e1a219f6c42c201140e6a562e5ddf7a9"} Jan 22 10:47:52 crc kubenswrapper[4975]: I0122 10:47:52.984259 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:48:12 crc kubenswrapper[4975]: I0122 10:48:12.030948 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" Jan 22 10:48:12 crc kubenswrapper[4975]: I0122 10:48:12.059776 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qr7kz" podStartSLOduration=21.05975283 podStartE2EDuration="21.05975283s" podCreationTimestamp="2026-01-22 10:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:53.018780265 +0000 UTC m=+665.676816415" watchObservedRunningTime="2026-01-22 10:48:12.05975283 +0000 UTC m=+684.717788970" Jan 22 10:48:12 crc kubenswrapper[4975]: I0122 10:48:12.130318 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-94jvp"] Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.369177 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx"] Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.370856 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.372927 4975 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7kprk" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.374492 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.379526 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.387106 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-l5qtb"] Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.388234 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-l5qtb" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.391653 4975 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7ggkg" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.393766 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx"] Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.400533 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qlcb7"] Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.401254 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.408027 4975 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5hmj4" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.412081 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-l5qtb"] Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.415507 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qlcb7"] Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.501574 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv5d7\" (UniqueName: \"kubernetes.io/projected/961f5491-5692-4052-a7de-b05dcf68274b-kube-api-access-vv5d7\") pod \"cert-manager-858654f9db-l5qtb\" (UID: \"961f5491-5692-4052-a7de-b05dcf68274b\") " pod="cert-manager/cert-manager-858654f9db-l5qtb" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.501813 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bk6t\" (UniqueName: \"kubernetes.io/projected/1c1a204e-dbad-4479-854d-98812e592f54-kube-api-access-7bk6t\") pod \"cert-manager-webhook-687f57d79b-qlcb7\" (UID: \"1c1a204e-dbad-4479-854d-98812e592f54\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.501903 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fckb\" (UniqueName: \"kubernetes.io/projected/dd1688d7-4085-4be3-9d16-3d212a30c5cb-kube-api-access-5fckb\") pod \"cert-manager-cainjector-cf98fcc89-4hxcx\" (UID: \"dd1688d7-4085-4be3-9d16-3d212a30c5cb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.603397 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bk6t\" (UniqueName: \"kubernetes.io/projected/1c1a204e-dbad-4479-854d-98812e592f54-kube-api-access-7bk6t\") pod \"cert-manager-webhook-687f57d79b-qlcb7\" (UID: \"1c1a204e-dbad-4479-854d-98812e592f54\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.603478 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fckb\" (UniqueName: \"kubernetes.io/projected/dd1688d7-4085-4be3-9d16-3d212a30c5cb-kube-api-access-5fckb\") pod \"cert-manager-cainjector-cf98fcc89-4hxcx\" (UID: \"dd1688d7-4085-4be3-9d16-3d212a30c5cb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 
10:48:27.603537 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv5d7\" (UniqueName: \"kubernetes.io/projected/961f5491-5692-4052-a7de-b05dcf68274b-kube-api-access-vv5d7\") pod \"cert-manager-858654f9db-l5qtb\" (UID: \"961f5491-5692-4052-a7de-b05dcf68274b\") " pod="cert-manager/cert-manager-858654f9db-l5qtb" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.635847 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bk6t\" (UniqueName: \"kubernetes.io/projected/1c1a204e-dbad-4479-854d-98812e592f54-kube-api-access-7bk6t\") pod \"cert-manager-webhook-687f57d79b-qlcb7\" (UID: \"1c1a204e-dbad-4479-854d-98812e592f54\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.636031 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv5d7\" (UniqueName: \"kubernetes.io/projected/961f5491-5692-4052-a7de-b05dcf68274b-kube-api-access-vv5d7\") pod \"cert-manager-858654f9db-l5qtb\" (UID: \"961f5491-5692-4052-a7de-b05dcf68274b\") " pod="cert-manager/cert-manager-858654f9db-l5qtb" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.643163 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fckb\" (UniqueName: \"kubernetes.io/projected/dd1688d7-4085-4be3-9d16-3d212a30c5cb-kube-api-access-5fckb\") pod \"cert-manager-cainjector-cf98fcc89-4hxcx\" (UID: \"dd1688d7-4085-4be3-9d16-3d212a30c5cb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.708520 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.721015 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-l5qtb" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.729320 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7" Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.942236 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx"] Jan 22 10:48:27 crc kubenswrapper[4975]: I0122 10:48:27.957242 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:48:28 crc kubenswrapper[4975]: W0122 10:48:28.209934 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod961f5491_5692_4052_a7de_b05dcf68274b.slice/crio-67b7330f1355ff421c1245ab03b5de2921554303151a615688e82ab3f25a2e85 WatchSource:0}: Error finding container 67b7330f1355ff421c1245ab03b5de2921554303151a615688e82ab3f25a2e85: Status 404 returned error can't find the container with id 67b7330f1355ff421c1245ab03b5de2921554303151a615688e82ab3f25a2e85 Jan 22 10:48:28 crc kubenswrapper[4975]: I0122 10:48:28.210946 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-l5qtb"] Jan 22 10:48:28 crc kubenswrapper[4975]: I0122 10:48:28.218471 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx" event={"ID":"dd1688d7-4085-4be3-9d16-3d212a30c5cb","Type":"ContainerStarted","Data":"336630be85c81812e8f58933564dc7d862dc4865b647818c0174c6039cb77986"} Jan 22 10:48:28 crc kubenswrapper[4975]: I0122 10:48:28.219233 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qlcb7"] Jan 22 10:48:28 crc kubenswrapper[4975]: I0122 10:48:28.219354 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-l5qtb" event={"ID":"961f5491-5692-4052-a7de-b05dcf68274b","Type":"ContainerStarted","Data":"67b7330f1355ff421c1245ab03b5de2921554303151a615688e82ab3f25a2e85"} Jan 22 10:48:28 crc kubenswrapper[4975]: W0122 10:48:28.231989 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c1a204e_dbad_4479_854d_98812e592f54.slice/crio-56654c319e09991f8bf2469d8fdcd80fbbb8828799e580f659f7db5dd1b69e34 WatchSource:0}: Error finding container 56654c319e09991f8bf2469d8fdcd80fbbb8828799e580f659f7db5dd1b69e34: Status 404 returned error can't find the container with id 56654c319e09991f8bf2469d8fdcd80fbbb8828799e580f659f7db5dd1b69e34 Jan 22 10:48:29 crc kubenswrapper[4975]: I0122 10:48:29.227327 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7" event={"ID":"1c1a204e-dbad-4479-854d-98812e592f54","Type":"ContainerStarted","Data":"56654c319e09991f8bf2469d8fdcd80fbbb8828799e580f659f7db5dd1b69e34"} Jan 22 10:48:34 crc kubenswrapper[4975]: I0122 10:48:34.255497 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7" event={"ID":"1c1a204e-dbad-4479-854d-98812e592f54","Type":"ContainerStarted","Data":"7b8a00cf4fb0738142d6486b58d0d687a4f5c557e09114b7edb2085389f86656"} Jan 22 10:48:34 crc kubenswrapper[4975]: I0122 10:48:34.255842 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7" Jan 22 10:48:34 crc kubenswrapper[4975]: I0122 10:48:34.257247 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-l5qtb" 
event={"ID":"961f5491-5692-4052-a7de-b05dcf68274b","Type":"ContainerStarted","Data":"080cbc8fe78be56d726972446842af6f9ba3618749a382825dace3a2167ea01a"} Jan 22 10:48:34 crc kubenswrapper[4975]: I0122 10:48:34.258690 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx" event={"ID":"dd1688d7-4085-4be3-9d16-3d212a30c5cb","Type":"ContainerStarted","Data":"e0d124f4a72672ffe5a788150255929679abf99246da41a4b8afd391d531b4cb"} Jan 22 10:48:34 crc kubenswrapper[4975]: I0122 10:48:34.281431 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7" podStartSLOduration=1.9738970679999999 podStartE2EDuration="7.281403159s" podCreationTimestamp="2026-01-22 10:48:27 +0000 UTC" firstStartedPulling="2026-01-22 10:48:28.233711656 +0000 UTC m=+700.891747766" lastFinishedPulling="2026-01-22 10:48:33.541217717 +0000 UTC m=+706.199253857" observedRunningTime="2026-01-22 10:48:34.275863693 +0000 UTC m=+706.933899843" watchObservedRunningTime="2026-01-22 10:48:34.281403159 +0000 UTC m=+706.939439299" Jan 22 10:48:34 crc kubenswrapper[4975]: I0122 10:48:34.297908 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-4hxcx" podStartSLOduration=1.808399643 podStartE2EDuration="7.297882782s" podCreationTimestamp="2026-01-22 10:48:27 +0000 UTC" firstStartedPulling="2026-01-22 10:48:27.956674109 +0000 UTC m=+700.614710229" lastFinishedPulling="2026-01-22 10:48:33.446157218 +0000 UTC m=+706.104193368" observedRunningTime="2026-01-22 10:48:34.296219056 +0000 UTC m=+706.954255196" watchObservedRunningTime="2026-01-22 10:48:34.297882782 +0000 UTC m=+706.955918932" Jan 22 10:48:34 crc kubenswrapper[4975]: I0122 10:48:34.369111 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-l5qtb" podStartSLOduration=2.133367447 podStartE2EDuration="7.369081991s" podCreationTimestamp="2026-01-22 10:48:27 +0000 UTC" firstStartedPulling="2026-01-22 10:48:28.212181273 +0000 UTC m=+700.870217443" lastFinishedPulling="2026-01-22 10:48:33.447895857 +0000 UTC m=+706.105931987" observedRunningTime="2026-01-22 10:48:34.357195157 +0000 UTC m=+707.015231307" watchObservedRunningTime="2026-01-22 10:48:34.369081991 +0000 UTC m=+707.027118131" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.184312 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" podUID="588b9cff-3186-4fc2-9498-90fd2272fb66" containerName="registry" containerID="cri-o://78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b" gracePeriod=30 Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.623140 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.763564 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-certificates\") pod \"588b9cff-3186-4fc2-9498-90fd2272fb66\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.763724 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"588b9cff-3186-4fc2-9498-90fd2272fb66\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.763788 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/588b9cff-3186-4fc2-9498-90fd2272fb66-installation-pull-secrets\") pod \"588b9cff-3186-4fc2-9498-90fd2272fb66\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.763840 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l48b\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-kube-api-access-4l48b\") pod \"588b9cff-3186-4fc2-9498-90fd2272fb66\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.763866 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-trusted-ca\") pod \"588b9cff-3186-4fc2-9498-90fd2272fb66\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.763906 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-bound-sa-token\") pod \"588b9cff-3186-4fc2-9498-90fd2272fb66\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.763984 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-tls\") pod \"588b9cff-3186-4fc2-9498-90fd2272fb66\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.764026 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/588b9cff-3186-4fc2-9498-90fd2272fb66-ca-trust-extracted\") pod \"588b9cff-3186-4fc2-9498-90fd2272fb66\" (UID: \"588b9cff-3186-4fc2-9498-90fd2272fb66\") " Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.765113 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "588b9cff-3186-4fc2-9498-90fd2272fb66" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.765257 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "588b9cff-3186-4fc2-9498-90fd2272fb66" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.772910 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "588b9cff-3186-4fc2-9498-90fd2272fb66" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.773538 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-kube-api-access-4l48b" (OuterVolumeSpecName: "kube-api-access-4l48b") pod "588b9cff-3186-4fc2-9498-90fd2272fb66" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66"). InnerVolumeSpecName "kube-api-access-4l48b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.774110 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588b9cff-3186-4fc2-9498-90fd2272fb66-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "588b9cff-3186-4fc2-9498-90fd2272fb66" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.776561 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "588b9cff-3186-4fc2-9498-90fd2272fb66" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.777514 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "588b9cff-3186-4fc2-9498-90fd2272fb66" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.792741 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588b9cff-3186-4fc2-9498-90fd2272fb66-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "588b9cff-3186-4fc2-9498-90fd2272fb66" (UID: "588b9cff-3186-4fc2-9498-90fd2272fb66"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.865967 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l48b\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-kube-api-access-4l48b\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.866026 4975 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.866049 4975 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.866069 4975 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.866086 4975 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/588b9cff-3186-4fc2-9498-90fd2272fb66-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.866103 4975 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/588b9cff-3186-4fc2-9498-90fd2272fb66-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:37 crc kubenswrapper[4975]: I0122 10:48:37.866119 4975 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/588b9cff-3186-4fc2-9498-90fd2272fb66-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:38 crc kubenswrapper[4975]: I0122 10:48:38.289461 4975 generic.go:334] "Generic (PLEG): container finished" podID="588b9cff-3186-4fc2-9498-90fd2272fb66" containerID="78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b" exitCode=0 Jan 22 10:48:38 crc kubenswrapper[4975]: I0122 10:48:38.289511 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" event={"ID":"588b9cff-3186-4fc2-9498-90fd2272fb66","Type":"ContainerDied","Data":"78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b"} Jan 22 10:48:38 crc kubenswrapper[4975]: I0122 10:48:38.289548 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" event={"ID":"588b9cff-3186-4fc2-9498-90fd2272fb66","Type":"ContainerDied","Data":"3cafaaea2a4f16d683453086c7e56462fefa2d9cd9b0fe8b63a4e95b9043c232"} Jan 22 10:48:38 crc kubenswrapper[4975]: I0122 10:48:38.289557 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-94jvp" Jan 22 10:48:38 crc kubenswrapper[4975]: I0122 10:48:38.289572 4975 scope.go:117] "RemoveContainer" containerID="78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b" Jan 22 10:48:38 crc kubenswrapper[4975]: I0122 10:48:38.315137 4975 scope.go:117] "RemoveContainer" containerID="78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b" Jan 22 10:48:38 crc kubenswrapper[4975]: E0122 10:48:38.315952 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b\": container with ID starting with 78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b not found: ID does not exist" containerID="78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b" Jan 22 10:48:38 crc kubenswrapper[4975]: I0122 10:48:38.316058 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b"} err="failed to get container status \"78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b\": rpc error: code = NotFound desc = could not find container \"78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b\": container with ID starting with 78598a43b45831b0cff7f91bb614c16f543d94712889b13883eb276e19191e3b not found: ID does not exist" Jan 22 10:48:38 crc kubenswrapper[4975]: I0122 10:48:38.344400 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-94jvp"] Jan 22 10:48:38 crc kubenswrapper[4975]: I0122 10:48:38.350612 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-94jvp"] Jan 22 10:48:39 crc kubenswrapper[4975]: I0122 10:48:39.767491 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588b9cff-3186-4fc2-9498-90fd2272fb66" path="/var/lib/kubelet/pods/588b9cff-3186-4fc2-9498-90fd2272fb66/volumes" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.448167 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t5gcx"] Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.450626 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="sbdb" containerID="cri-o://fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" gracePeriod=30 Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.450733 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="nbdb" containerID="cri-o://a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" gracePeriod=30 Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.452841 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="northd" containerID="cri-o://9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13" gracePeriod=30 Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.452890 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" 
podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e" gracePeriod=30 Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.453016 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kube-rbac-proxy-node" containerID="cri-o://954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae" gracePeriod=30 Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.453099 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovn-controller" containerID="cri-o://4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641" gracePeriod=30 Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.453099 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovn-acl-logging" containerID="cri-o://8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57" gracePeriod=30 Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.499793 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" containerID="cri-o://c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d" gracePeriod=30 Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.708718 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e is running failed: container process not found" containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.708818 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637 is running failed: container process not found" containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.709121 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637 is running failed: container process not found" containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.709420 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637 is running failed: container process not found" containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.709486 4975 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="sbdb" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.709404 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e is running failed: container process not found" containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.709836 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e is running failed: container process not found" containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.709875 4975 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="nbdb" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.765139 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/3.log" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.768285 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovn-acl-logging/0.log" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.768943 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovn-controller/0.log" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.769510 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.841970 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gpwt7"] Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842301 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kubecfg-setup" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842353 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kubecfg-setup" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842375 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588b9cff-3186-4fc2-9498-90fd2272fb66" containerName="registry" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842390 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="588b9cff-3186-4fc2-9498-90fd2272fb66" containerName="registry" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842414 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842427 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842447 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovn-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842459 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovn-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842474 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842486 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842499 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="northd" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842511 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="northd" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842524 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842535 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842554 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842566 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842586 4975 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kube-rbac-proxy-node" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842600 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kube-rbac-proxy-node" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842620 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="nbdb" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842633 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="nbdb" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842652 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovn-acl-logging" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842664 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovn-acl-logging" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.842684 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="sbdb" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842695 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="sbdb" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842863 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842880 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842894 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovn-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842912 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="northd" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842931 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kube-rbac-proxy-node" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842948 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="nbdb" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842967 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="588b9cff-3186-4fc2-9498-90fd2272fb66" containerName="registry" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842983 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovn-acl-logging" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.842996 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.843009 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.843024 4975 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.843040 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="sbdb" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.843209 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.843223 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.843421 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: E0122 10:48:41.843578 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.843592 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerName="ovnkube-controller" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.846276 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938620 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-netns\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938672 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-env-overrides\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938698 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-script-lib\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938718 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-etc-openvswitch\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938742 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938773 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-config\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938801 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovn-node-metrics-cert\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938823 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-netd\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938877 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.938913 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.939137 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mwgl\" (UniqueName: \"kubernetes.io/projected/c1de894a-1cf5-47fa-9fc5-0a52783fe152-kube-api-access-9mwgl\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.939375 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.939764 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.939805 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.939832 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940300 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940425 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-systemd-units\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940529 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940735 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-var-lib-openvswitch\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940777 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-openvswitch\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940860 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-log-socket\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940885 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-slash\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940903 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-systemd\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: 
\"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940927 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-node-log\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940946 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-ovn-kubernetes\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.940986 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-ovn\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941006 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-kubelet\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941026 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-slash" (OuterVolumeSpecName: "host-slash") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941037 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-bin\") pod \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\" (UID: \"c1de894a-1cf5-47fa-9fc5-0a52783fe152\") " Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941062 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941168 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-run-systemd\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941202 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-cni-netd\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941253 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-kubelet\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941286 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-var-lib-openvswitch\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941317 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-cni-bin\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941392 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbhdd\" (UniqueName: \"kubernetes.io/projected/0fbb7839-d201-47bc-bf4f-74e0eeb98937-kube-api-access-kbhdd\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941425 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-slash\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941465 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941297 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941374 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-node-log" (OuterVolumeSpecName: "node-log") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941381 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941425 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941482 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941508 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941540 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-log-socket" (OuterVolumeSpecName: "log-socket") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941554 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-etc-openvswitch\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941689 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-run-ovn\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941746 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fbb7839-d201-47bc-bf4f-74e0eeb98937-ovn-node-metrics-cert\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941779 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fbb7839-d201-47bc-bf4f-74e0eeb98937-ovnkube-script-lib\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941843 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fbb7839-d201-47bc-bf4f-74e0eeb98937-ovnkube-config\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941906 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fbb7839-d201-47bc-bf4f-74e0eeb98937-env-overrides\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941943 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-run-netns\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.941983 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-log-socket\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942027 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-run-openvswitch\") 
pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942078 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-systemd-units\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942123 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-run-ovn-kubernetes\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942156 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-node-log\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942282 4975 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942304 4975 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942325 4975 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942374 4975 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942392 4975 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942410 4975 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942429 4975 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942446 4975 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc 
kubenswrapper[4975]: I0122 10:48:41.942461 4975 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942479 4975 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942498 4975 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942514 4975 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942531 4975 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942548 4975 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942564 4975 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942580 4975 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.942599 4975 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.947216 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1de894a-1cf5-47fa-9fc5-0a52783fe152-kube-api-access-9mwgl" (OuterVolumeSpecName: "kube-api-access-9mwgl") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "kube-api-access-9mwgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.947925 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:48:41 crc kubenswrapper[4975]: I0122 10:48:41.957649 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c1de894a-1cf5-47fa-9fc5-0a52783fe152" (UID: "c1de894a-1cf5-47fa-9fc5-0a52783fe152"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043045 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-kubelet\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043117 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-var-lib-openvswitch\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043151 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-cni-bin\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043190 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbhdd\" (UniqueName: \"kubernetes.io/projected/0fbb7839-d201-47bc-bf4f-74e0eeb98937-kube-api-access-kbhdd\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043192 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-kubelet\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043228 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-slash\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043267 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043263 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-var-lib-openvswitch\") pod \"ovnkube-node-gpwt7\" (UID: 
\"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043304 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-etc-openvswitch\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043370 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-run-ovn\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043383 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-cni-bin\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043404 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fbb7839-d201-47bc-bf4f-74e0eeb98937-ovnkube-script-lib\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043443 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fbb7839-d201-47bc-bf4f-74e0eeb98937-ovn-node-metrics-cert\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043419 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043511 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-slash\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043481 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fbb7839-d201-47bc-bf4f-74e0eeb98937-ovnkube-config\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043596 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-run-ovn\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 
22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043650 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fbb7839-d201-47bc-bf4f-74e0eeb98937-env-overrides\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043677 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-etc-openvswitch\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.045183 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-run-netns\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.045469 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fbb7839-d201-47bc-bf4f-74e0eeb98937-ovnkube-script-lib\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.045466 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fbb7839-d201-47bc-bf4f-74e0eeb98937-ovnkube-config\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.043687 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-run-netns\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.045949 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-log-socket\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.045991 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-run-openvswitch\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046037 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-systemd-units\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046080 4975 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-run-ovn-kubernetes\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046113 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-node-log\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046130 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-log-socket\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046169 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-systemd-units\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046176 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-run-systemd\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046202 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-run-openvswitch\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046229 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-run-ovn-kubernetes\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046307 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-run-systemd\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046379 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-node-log\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046390 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-cni-netd\") pod \"ovnkube-node-gpwt7\" (UID: 
\"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046433 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fbb7839-d201-47bc-bf4f-74e0eeb98937-host-cni-netd\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046505 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mwgl\" (UniqueName: \"kubernetes.io/projected/c1de894a-1cf5-47fa-9fc5-0a52783fe152-kube-api-access-9mwgl\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046542 4975 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1de894a-1cf5-47fa-9fc5-0a52783fe152-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.046570 4975 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1de894a-1cf5-47fa-9fc5-0a52783fe152-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.047810 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fbb7839-d201-47bc-bf4f-74e0eeb98937-env-overrides\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.049080 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fbb7839-d201-47bc-bf4f-74e0eeb98937-ovn-node-metrics-cert\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.073946 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhdd\" (UniqueName: \"kubernetes.io/projected/0fbb7839-d201-47bc-bf4f-74e0eeb98937-kube-api-access-kbhdd\") pod \"ovnkube-node-gpwt7\" (UID: \"0fbb7839-d201-47bc-bf4f-74e0eeb98937\") " pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.167537 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.316290 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/2.log" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.317472 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/1.log" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.317557 4975 generic.go:334] "Generic (PLEG): container finished" podID="9fcb4bcb-0a79-4420-bb51-e4585d3306a9" containerID="cb3fc758f4e3dafef1c65a6af58250e64c986ea64452a9ca79134a5387a92fb0" exitCode=2 Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.317682 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmklh" event={"ID":"9fcb4bcb-0a79-4420-bb51-e4585d3306a9","Type":"ContainerDied","Data":"cb3fc758f4e3dafef1c65a6af58250e64c986ea64452a9ca79134a5387a92fb0"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.317755 4975 scope.go:117] "RemoveContainer" containerID="fe062dafe3183de434965592f487d943c567d90e5c59b5de1470b7676ddf3608" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.318200 4975 scope.go:117] "RemoveContainer" containerID="cb3fc758f4e3dafef1c65a6af58250e64c986ea64452a9ca79134a5387a92fb0" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.318397 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qmklh_openshift-multus(9fcb4bcb-0a79-4420-bb51-e4585d3306a9)\"" pod="openshift-multus/multus-qmklh" podUID="9fcb4bcb-0a79-4420-bb51-e4585d3306a9" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.319576 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerStarted","Data":"abbae2d2ca70733f116729ef06d13c9ad10743569a1f6f4c32fc225731cae029"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.325924 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovnkube-controller/3.log" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.332165 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovn-acl-logging/0.log" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.332969 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t5gcx_c1de894a-1cf5-47fa-9fc5-0a52783fe152/ovn-controller/0.log" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334023 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d" exitCode=0 Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334053 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" exitCode=0 Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334061 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" 
containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" exitCode=0 Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334070 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13" exitCode=0 Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334078 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e" exitCode=0 Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334086 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae" exitCode=0 Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334094 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57" exitCode=143 Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334102 4975 generic.go:334] "Generic (PLEG): container finished" podID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" containerID="4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641" exitCode=143 Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334147 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334178 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334193 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334208 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334241 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334251 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334262 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} Jan 22 10:48:42 crc 
kubenswrapper[4975]: I0122 10:48:42.334272 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334309 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334316 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334321 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334341 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334347 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334353 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334358 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334363 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334371 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334381 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334387 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334393 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334400 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} Jan 22 10:48:42 crc 
kubenswrapper[4975]: I0122 10:48:42.334405 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334410 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334415 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334420 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334425 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334430 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334436 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334444 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334450 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334456 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334461 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334466 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334494 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334501 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} Jan 22 10:48:42 crc 
kubenswrapper[4975]: I0122 10:48:42.334506 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334511 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334516 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334523 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" event={"ID":"c1de894a-1cf5-47fa-9fc5-0a52783fe152","Type":"ContainerDied","Data":"609a0c4b49a23bf035b02035014f26d5697b280d3b976f765ee49364d080c3bc"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334531 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334557 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334563 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334568 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334574 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334579 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334584 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334589 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334595 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.334599 4975 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} Jan 22 10:48:42 crc 
kubenswrapper[4975]: I0122 10:48:42.334693 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t5gcx" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.359458 4975 scope.go:117] "RemoveContainer" containerID="c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.437127 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.440113 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t5gcx"] Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.447170 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t5gcx"] Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.464351 4975 scope.go:117] "RemoveContainer" containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.486732 4975 scope.go:117] "RemoveContainer" containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.505484 4975 scope.go:117] "RemoveContainer" containerID="9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.524963 4975 scope.go:117] "RemoveContainer" containerID="19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.547183 4975 scope.go:117] "RemoveContainer" containerID="954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.568307 4975 scope.go:117] "RemoveContainer" containerID="8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.626070 4975 scope.go:117] "RemoveContainer" containerID="4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.646864 4975 scope.go:117] "RemoveContainer" containerID="4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.666767 4975 scope.go:117] "RemoveContainer" containerID="c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.667241 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d\": container with ID starting with c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d not found: ID does not exist" containerID="c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.667280 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} err="failed to get container status \"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d\": rpc error: code = NotFound desc = could not find container \"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d\": container with ID starting with c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d not found: ID does not exist" Jan 22 
10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.667306 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.667745 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\": container with ID starting with 1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369 not found: ID does not exist" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.667794 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} err="failed to get container status \"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\": rpc error: code = NotFound desc = could not find container \"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\": container with ID starting with 1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.667829 4975 scope.go:117] "RemoveContainer" containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.668153 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\": container with ID starting with fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637 not found: ID does not exist" containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.668179 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} err="failed to get container status \"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\": rpc error: code = NotFound desc = could not find container \"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\": container with ID starting with fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.668199 4975 scope.go:117] "RemoveContainer" containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.668767 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\": container with ID starting with a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e not found: ID does not exist" containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.668806 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} err="failed to get container status \"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\": rpc error: code = NotFound desc = could not find container 
\"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\": container with ID starting with a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.668834 4975 scope.go:117] "RemoveContainer" containerID="9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.669209 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\": container with ID starting with 9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13 not found: ID does not exist" containerID="9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.669270 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} err="failed to get container status \"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\": rpc error: code = NotFound desc = could not find container \"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\": container with ID starting with 9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.669308 4975 scope.go:117] "RemoveContainer" containerID="19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.669716 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\": container with ID starting with 19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e not found: ID does not exist" containerID="19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.669740 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} err="failed to get container status \"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\": rpc error: code = NotFound desc = could not find container \"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\": container with ID starting with 19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.669754 4975 scope.go:117] "RemoveContainer" containerID="954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.670060 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\": container with ID starting with 954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae not found: ID does not exist" containerID="954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.670102 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} 
err="failed to get container status \"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\": rpc error: code = NotFound desc = could not find container \"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\": container with ID starting with 954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.670133 4975 scope.go:117] "RemoveContainer" containerID="8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.670477 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\": container with ID starting with 8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57 not found: ID does not exist" containerID="8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.670499 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} err="failed to get container status \"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\": rpc error: code = NotFound desc = could not find container \"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\": container with ID starting with 8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.670535 4975 scope.go:117] "RemoveContainer" containerID="4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.670826 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\": container with ID starting with 4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641 not found: ID does not exist" containerID="4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.670865 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} err="failed to get container status \"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\": rpc error: code = NotFound desc = could not find container \"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\": container with ID starting with 4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.670891 4975 scope.go:117] "RemoveContainer" containerID="4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79" Jan 22 10:48:42 crc kubenswrapper[4975]: E0122 10:48:42.671186 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\": container with ID starting with 4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79 not found: ID does not exist" containerID="4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.671208 4975 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} err="failed to get container status \"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\": rpc error: code = NotFound desc = could not find container \"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\": container with ID starting with 4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.671221 4975 scope.go:117] "RemoveContainer" containerID="c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.671569 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} err="failed to get container status \"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d\": rpc error: code = NotFound desc = could not find container \"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d\": container with ID starting with c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.671588 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.671932 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} err="failed to get container status \"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\": rpc error: code = NotFound desc = could not find container \"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\": container with ID starting with 1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.672070 4975 scope.go:117] "RemoveContainer" containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.672434 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} err="failed to get container status \"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\": rpc error: code = NotFound desc = could not find container \"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\": container with ID starting with fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.672453 4975 scope.go:117] "RemoveContainer" containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.672717 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} err="failed to get container status \"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\": rpc error: code = NotFound desc = could not find container \"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\": container with ID starting with 
a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.672752 4975 scope.go:117] "RemoveContainer" containerID="9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.673093 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} err="failed to get container status \"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\": rpc error: code = NotFound desc = could not find container \"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\": container with ID starting with 9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.673112 4975 scope.go:117] "RemoveContainer" containerID="19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.673367 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} err="failed to get container status \"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\": rpc error: code = NotFound desc = could not find container \"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\": container with ID starting with 19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.673384 4975 scope.go:117] "RemoveContainer" containerID="954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.673758 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} err="failed to get container status \"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\": rpc error: code = NotFound desc = could not find container \"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\": container with ID starting with 954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.673770 4975 scope.go:117] "RemoveContainer" containerID="8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.673977 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} err="failed to get container status \"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\": rpc error: code = NotFound desc = could not find container \"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\": container with ID starting with 8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.673990 4975 scope.go:117] "RemoveContainer" containerID="4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.674197 4975 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} err="failed to get container status \"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\": rpc error: code = NotFound desc = could not find container \"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\": container with ID starting with 4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.674208 4975 scope.go:117] "RemoveContainer" containerID="4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.674489 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} err="failed to get container status \"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\": rpc error: code = NotFound desc = could not find container \"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\": container with ID starting with 4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.674504 4975 scope.go:117] "RemoveContainer" containerID="c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.674721 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} err="failed to get container status \"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d\": rpc error: code = NotFound desc = could not find container \"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d\": container with ID starting with c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.674732 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.674935 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} err="failed to get container status \"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\": rpc error: code = NotFound desc = could not find container \"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\": container with ID starting with 1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.674945 4975 scope.go:117] "RemoveContainer" containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.675207 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} err="failed to get container status \"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\": rpc error: code = NotFound desc = could not find container \"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\": container with ID starting with fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637 not found: ID does not exist" Jan 
22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.675222 4975 scope.go:117] "RemoveContainer" containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.675502 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} err="failed to get container status \"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\": rpc error: code = NotFound desc = could not find container \"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\": container with ID starting with a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.675520 4975 scope.go:117] "RemoveContainer" containerID="9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.675731 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} err="failed to get container status \"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\": rpc error: code = NotFound desc = could not find container \"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\": container with ID starting with 9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.675745 4975 scope.go:117] "RemoveContainer" containerID="19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.675953 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} err="failed to get container status \"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\": rpc error: code = NotFound desc = could not find container \"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\": container with ID starting with 19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.675964 4975 scope.go:117] "RemoveContainer" containerID="954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.676166 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} err="failed to get container status \"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\": rpc error: code = NotFound desc = could not find container \"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\": container with ID starting with 954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.676178 4975 scope.go:117] "RemoveContainer" containerID="8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.676444 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} err="failed to get container status 
\"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\": rpc error: code = NotFound desc = could not find container \"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\": container with ID starting with 8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.676460 4975 scope.go:117] "RemoveContainer" containerID="4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.676792 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} err="failed to get container status \"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\": rpc error: code = NotFound desc = could not find container \"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\": container with ID starting with 4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.676859 4975 scope.go:117] "RemoveContainer" containerID="4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.677281 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} err="failed to get container status \"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\": rpc error: code = NotFound desc = could not find container \"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\": container with ID starting with 4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.677321 4975 scope.go:117] "RemoveContainer" containerID="c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.677705 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d"} err="failed to get container status \"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d\": rpc error: code = NotFound desc = could not find container \"c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d\": container with ID starting with c7bb8da0e65901edf9a2fdeafbb977d3a385d58f9184510be24037bbdcaf3b3d not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.677738 4975 scope.go:117] "RemoveContainer" containerID="1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.678362 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369"} err="failed to get container status \"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\": rpc error: code = NotFound desc = could not find container \"1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369\": container with ID starting with 1bf823772c79c0097bcc28f6bddea3a4bb19db5dcc7f00e350e66817cc17f369 not found: ID does not exist" Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.678401 4975 scope.go:117] "RemoveContainer" 
containerID="fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.678729 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637"} err="failed to get container status \"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\": rpc error: code = NotFound desc = could not find container \"fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637\": container with ID starting with fea32261f6c39103aab476431431bf6f7ca23ef0f98a2b384705b341d89d6637 not found: ID does not exist"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.678761 4975 scope.go:117] "RemoveContainer" containerID="a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.679098 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e"} err="failed to get container status \"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\": rpc error: code = NotFound desc = could not find container \"a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e\": container with ID starting with a681ad952765e3cba64272338744be7c953b1630741db3eaf46315494784621e not found: ID does not exist"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.679134 4975 scope.go:117] "RemoveContainer" containerID="9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.680525 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13"} err="failed to get container status \"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\": rpc error: code = NotFound desc = could not find container \"9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13\": container with ID starting with 9331715c108aca10e48375d033d75b5f73c2ab148b0dffb2d92542d51b4a3e13 not found: ID does not exist"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.680682 4975 scope.go:117] "RemoveContainer" containerID="19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.681179 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e"} err="failed to get container status \"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\": rpc error: code = NotFound desc = could not find container \"19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e\": container with ID starting with 19a1611988fa0a47bee2792ab6ffc45f20b3eedcc1d4e27ecba797dc01d7815e not found: ID does not exist"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.681215 4975 scope.go:117] "RemoveContainer" containerID="954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.681631 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae"} err="failed to get container status \"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\": rpc error: code = NotFound desc = could not find container \"954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae\": container with ID starting with 954ed475e5de1c909b712ac091376df34736038cae0c49f1dfe0ed49b8d3deae not found: ID does not exist"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.681667 4975 scope.go:117] "RemoveContainer" containerID="8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.682298 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57"} err="failed to get container status \"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\": rpc error: code = NotFound desc = could not find container \"8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57\": container with ID starting with 8e8cafbde14fd0e6f9770da957130e7e9c9d3a1976760ae15f9fea8505ca7f57 not found: ID does not exist"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.682352 4975 scope.go:117] "RemoveContainer" containerID="4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.682808 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641"} err="failed to get container status \"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\": rpc error: code = NotFound desc = could not find container \"4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641\": container with ID starting with 4736674f6366c38881923df9f5fc5456fb08d5b87378d1034343d2b93f9b9641 not found: ID does not exist"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.682844 4975 scope.go:117] "RemoveContainer" containerID="4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.683282 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79"} err="failed to get container status \"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\": rpc error: code = NotFound desc = could not find container \"4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79\": container with ID starting with 4a55fdadab2f9b8509560f213d413bb41581c8dc64d5f3b925128d80a643eb79 not found: ID does not exist"
Jan 22 10:48:42 crc kubenswrapper[4975]: I0122 10:48:42.732510 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-qlcb7"
Jan 22 10:48:43 crc kubenswrapper[4975]: I0122 10:48:43.340637 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/2.log"
Jan 22 10:48:43 crc kubenswrapper[4975]: I0122 10:48:43.342240 4975 generic.go:334] "Generic (PLEG): container finished" podID="0fbb7839-d201-47bc-bf4f-74e0eeb98937" containerID="c75327bc905f59dcfe821d66d620b7f7f357b085adb35a1e54e2839212c037e3" exitCode=0
Jan 22 10:48:43 crc kubenswrapper[4975]: I0122 10:48:43.342317 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerDied","Data":"c75327bc905f59dcfe821d66d620b7f7f357b085adb35a1e54e2839212c037e3"}
Jan 22 10:48:43 crc kubenswrapper[4975]: I0122 10:48:43.766445 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1de894a-1cf5-47fa-9fc5-0a52783fe152" path="/var/lib/kubelet/pods/c1de894a-1cf5-47fa-9fc5-0a52783fe152/volumes"
Jan 22 10:48:44 crc kubenswrapper[4975]: I0122 10:48:44.353123 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerStarted","Data":"9bd500c12dc38c2c60f63462325e0772425bf19c186cdca7c084d2baa8037297"}
Jan 22 10:48:44 crc kubenswrapper[4975]: I0122 10:48:44.353170 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerStarted","Data":"5e478c167952cb6851a798caf15867482770e11b98d21fb71f1aee5919b3f057"}
Jan 22 10:48:44 crc kubenswrapper[4975]: I0122 10:48:44.353187 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerStarted","Data":"c5f3d5560f18999d14938b6f74ef7041d522bf454344eb6c2e3c1d488b615230"}
Jan 22 10:48:45 crc kubenswrapper[4975]: I0122 10:48:45.368964 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerStarted","Data":"d9b8d0513e1839e054cfaf0527a2cc4d1af10b5afa32ecfbd79a0b9a3fa9e6ce"}
Jan 22 10:48:45 crc kubenswrapper[4975]: I0122 10:48:45.370313 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerStarted","Data":"88af42693364e26486933c52d9ba0e46a1ca1005485d91329c6af59619cd4b3c"}
Jan 22 10:48:45 crc kubenswrapper[4975]: I0122 10:48:45.370579 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerStarted","Data":"ec85a33f0dc9173b8d8b71fc4c468d65fbb2f144295b8a7443bf79034dae5099"}
Jan 22 10:48:47 crc kubenswrapper[4975]: I0122 10:48:47.391127 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerStarted","Data":"4a38888b813a61ea8b0f614fe5667454da0f97d3edfa0e5416d3375c6b8a6de1"}
Jan 22 10:48:50 crc kubenswrapper[4975]: I0122 10:48:50.425580 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" event={"ID":"0fbb7839-d201-47bc-bf4f-74e0eeb98937","Type":"ContainerStarted","Data":"b7f8c9fdfc90c9d92d2db6aed0f593d7c723a72e16b192d44e879c79ca1ca721"}
Jan 22 10:48:50 crc kubenswrapper[4975]: I0122 10:48:50.426070 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7"
Jan 22 10:48:50 crc kubenswrapper[4975]: I0122 10:48:50.426087 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7"
Jan 22 10:48:50 crc kubenswrapper[4975]: I0122 10:48:50.456875 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7"
Jan 22 10:48:50 crc kubenswrapper[4975]: I0122 10:48:50.463640 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7" podStartSLOduration=9.463624903 podStartE2EDuration="9.463624903s" podCreationTimestamp="2026-01-22 10:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:48:50.460448815 +0000 UTC m=+723.118484935" watchObservedRunningTime="2026-01-22 10:48:50.463624903 +0000 UTC m=+723.121661023"
Jan 22 10:48:51 crc kubenswrapper[4975]: I0122 10:48:51.433176 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7"
Jan 22 10:48:51 crc kubenswrapper[4975]: I0122 10:48:51.478575 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7"
Jan 22 10:48:54 crc kubenswrapper[4975]: I0122 10:48:54.755896 4975 scope.go:117] "RemoveContainer" containerID="cb3fc758f4e3dafef1c65a6af58250e64c986ea64452a9ca79134a5387a92fb0"
Jan 22 10:48:54 crc kubenswrapper[4975]: E0122 10:48:54.756848 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qmklh_openshift-multus(9fcb4bcb-0a79-4420-bb51-e4585d3306a9)\"" pod="openshift-multus/multus-qmklh" podUID="9fcb4bcb-0a79-4420-bb51-e4585d3306a9"
Jan 22 10:48:57 crc kubenswrapper[4975]: I0122 10:48:57.436255 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:48:57 crc kubenswrapper[4975]: I0122 10:48:57.436716 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:49:07 crc kubenswrapper[4975]: I0122 10:49:07.759183 4975 scope.go:117] "RemoveContainer" containerID="cb3fc758f4e3dafef1c65a6af58250e64c986ea64452a9ca79134a5387a92fb0"
Jan 22 10:49:09 crc kubenswrapper[4975]: I0122 10:49:09.574709 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmklh_9fcb4bcb-0a79-4420-bb51-e4585d3306a9/kube-multus/2.log"
Jan 22 10:49:09 crc kubenswrapper[4975]: I0122 10:49:09.576589 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmklh" event={"ID":"9fcb4bcb-0a79-4420-bb51-e4585d3306a9","Type":"ContainerStarted","Data":"d62ca04399dfe188fcf68a756396e1f03040c6aa1d00615fe8bf99d2ffac85ac"}
Jan 22 10:49:13 crc kubenswrapper[4975]: I0122 10:49:13.096153 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gpwt7"
Jan 22 10:49:27 crc kubenswrapper[4975]: I0122 10:49:27.072061 4975 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 10:49:27 crc kubenswrapper[4975]: I0122 10:49:27.436530 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:49:27 crc kubenswrapper[4975]: I0122 10:49:27.436624 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.242090 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"]
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.243833 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.245920 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.253102 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"]
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.308083 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.308502 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwgr\" (UniqueName: \"kubernetes.io/projected/cd03090e-9416-416a-a3cc-81d567aba65c-kube-api-access-xfwgr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.308756 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.409731 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.409835 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.409887 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfwgr\" (UniqueName: \"kubernetes.io/projected/cd03090e-9416-416a-a3cc-81d567aba65c-kube-api-access-xfwgr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.410583 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.410688 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.445547 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfwgr\" (UniqueName: \"kubernetes.io/projected/cd03090e-9416-416a-a3cc-81d567aba65c-kube-api-access-xfwgr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:29 crc kubenswrapper[4975]: I0122 10:49:29.568462 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:30 crc kubenswrapper[4975]: I0122 10:49:30.127094 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"]
Jan 22 10:49:30 crc kubenswrapper[4975]: W0122 10:49:30.137932 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd03090e_9416_416a_a3cc_81d567aba65c.slice/crio-0f0858bfc12d8f3999a43e58b4d4d6bd0dd45abff05c733beeeab5fbfd3e53fe WatchSource:0}: Error finding container 0f0858bfc12d8f3999a43e58b4d4d6bd0dd45abff05c733beeeab5fbfd3e53fe: Status 404 returned error can't find the container with id 0f0858bfc12d8f3999a43e58b4d4d6bd0dd45abff05c733beeeab5fbfd3e53fe
Jan 22 10:49:30 crc kubenswrapper[4975]: I0122 10:49:30.223203 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v" event={"ID":"cd03090e-9416-416a-a3cc-81d567aba65c","Type":"ContainerStarted","Data":"0f0858bfc12d8f3999a43e58b4d4d6bd0dd45abff05c733beeeab5fbfd3e53fe"}
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.234500 4975 generic.go:334] "Generic (PLEG): container finished" podID="cd03090e-9416-416a-a3cc-81d567aba65c" containerID="0eb9d95211dc98a6e1075dbe7605088e41967630e3d806488e3418ca259ee603" exitCode=0
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.234615 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v" event={"ID":"cd03090e-9416-416a-a3cc-81d567aba65c","Type":"ContainerDied","Data":"0eb9d95211dc98a6e1075dbe7605088e41967630e3d806488e3418ca259ee603"}
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.477474 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zsd96"]
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.479266 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.489282 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsd96"]
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.538013 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-utilities\") pod \"redhat-operators-zsd96\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") " pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.538179 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2n69\" (UniqueName: \"kubernetes.io/projected/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-kube-api-access-k2n69\") pod \"redhat-operators-zsd96\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") " pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.538424 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-catalog-content\") pod \"redhat-operators-zsd96\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") " pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.639231 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-catalog-content\") pod \"redhat-operators-zsd96\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") " pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.639445 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-utilities\") pod \"redhat-operators-zsd96\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") " pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.639492 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2n69\" (UniqueName: \"kubernetes.io/projected/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-kube-api-access-k2n69\") pod \"redhat-operators-zsd96\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") " pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.640027 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-catalog-content\") pod \"redhat-operators-zsd96\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") " pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.640175 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-utilities\") pod \"redhat-operators-zsd96\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") " pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.676560 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2n69\" (UniqueName: \"kubernetes.io/projected/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-kube-api-access-k2n69\") pod \"redhat-operators-zsd96\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") " pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:31 crc kubenswrapper[4975]: I0122 10:49:31.812402 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:32 crc kubenswrapper[4975]: I0122 10:49:32.008686 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsd96"]
Jan 22 10:49:32 crc kubenswrapper[4975]: I0122 10:49:32.241280 4975 generic.go:334] "Generic (PLEG): container finished" podID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerID="15efc7ffc862b71cd98f11b78e5197c440fd09740290a3b8386ffe01cbac0803" exitCode=0
Jan 22 10:49:32 crc kubenswrapper[4975]: I0122 10:49:32.241338 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsd96" event={"ID":"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402","Type":"ContainerDied","Data":"15efc7ffc862b71cd98f11b78e5197c440fd09740290a3b8386ffe01cbac0803"}
Jan 22 10:49:32 crc kubenswrapper[4975]: I0122 10:49:32.241597 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsd96" event={"ID":"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402","Type":"ContainerStarted","Data":"cae1cde3eb6629ce5abd1aacbbf9f62751b88443fd9847c0f8dec22b1effc916"}
Jan 22 10:49:33 crc kubenswrapper[4975]: I0122 10:49:33.249256 4975 generic.go:334] "Generic (PLEG): container finished" podID="cd03090e-9416-416a-a3cc-81d567aba65c" containerID="709b17b87cf08852b7af405c685474d05ee2ebeef2ba4420d39f11a3994342aa" exitCode=0
Jan 22 10:49:33 crc kubenswrapper[4975]: I0122 10:49:33.249306 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v" event={"ID":"cd03090e-9416-416a-a3cc-81d567aba65c","Type":"ContainerDied","Data":"709b17b87cf08852b7af405c685474d05ee2ebeef2ba4420d39f11a3994342aa"}
Jan 22 10:49:33 crc kubenswrapper[4975]: I0122 10:49:33.252282 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsd96" event={"ID":"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402","Type":"ContainerStarted","Data":"c28c9936a1e5e990738e25c00c2c1a3cd4a6e380b87b8453bcc8f97430b016d7"}
Jan 22 10:49:34 crc kubenswrapper[4975]: I0122 10:49:34.262459 4975 generic.go:334] "Generic (PLEG): container finished" podID="cd03090e-9416-416a-a3cc-81d567aba65c" containerID="b09e821628b88201656b5a0d41569b28bd343b6565ac4d0769b7a50f8ef47489" exitCode=0
Jan 22 10:49:34 crc kubenswrapper[4975]: I0122 10:49:34.262597 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v" event={"ID":"cd03090e-9416-416a-a3cc-81d567aba65c","Type":"ContainerDied","Data":"b09e821628b88201656b5a0d41569b28bd343b6565ac4d0769b7a50f8ef47489"}
Jan 22 10:49:34 crc kubenswrapper[4975]: I0122 10:49:34.265256 4975 generic.go:334] "Generic (PLEG): container finished" podID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerID="c28c9936a1e5e990738e25c00c2c1a3cd4a6e380b87b8453bcc8f97430b016d7" exitCode=0
Jan 22 10:49:34 crc kubenswrapper[4975]: I0122 10:49:34.265384 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsd96" event={"ID":"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402","Type":"ContainerDied","Data":"c28c9936a1e5e990738e25c00c2c1a3cd4a6e380b87b8453bcc8f97430b016d7"}
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.276214 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsd96" event={"ID":"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402","Type":"ContainerStarted","Data":"94b6a88cdce35836736d403d107e62edd9420e3f52508f92ae3c7412acb0cbbb"}
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.306823 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zsd96" podStartSLOduration=1.905650429 podStartE2EDuration="4.306793808s" podCreationTimestamp="2026-01-22 10:49:31 +0000 UTC" firstStartedPulling="2026-01-22 10:49:32.242862871 +0000 UTC m=+764.900898981" lastFinishedPulling="2026-01-22 10:49:34.64400625 +0000 UTC m=+767.302042360" observedRunningTime="2026-01-22 10:49:35.300594288 +0000 UTC m=+767.958630408" watchObservedRunningTime="2026-01-22 10:49:35.306793808 +0000 UTC m=+767.964829958"
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.570772 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.698497 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfwgr\" (UniqueName: \"kubernetes.io/projected/cd03090e-9416-416a-a3cc-81d567aba65c-kube-api-access-xfwgr\") pod \"cd03090e-9416-416a-a3cc-81d567aba65c\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") "
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.698579 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-bundle\") pod \"cd03090e-9416-416a-a3cc-81d567aba65c\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") "
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.698750 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-util\") pod \"cd03090e-9416-416a-a3cc-81d567aba65c\" (UID: \"cd03090e-9416-416a-a3cc-81d567aba65c\") "
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.699440 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-bundle" (OuterVolumeSpecName: "bundle") pod "cd03090e-9416-416a-a3cc-81d567aba65c" (UID: "cd03090e-9416-416a-a3cc-81d567aba65c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.707670 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd03090e-9416-416a-a3cc-81d567aba65c-kube-api-access-xfwgr" (OuterVolumeSpecName: "kube-api-access-xfwgr") pod "cd03090e-9416-416a-a3cc-81d567aba65c" (UID: "cd03090e-9416-416a-a3cc-81d567aba65c"). InnerVolumeSpecName "kube-api-access-xfwgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.737371 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-util" (OuterVolumeSpecName: "util") pod "cd03090e-9416-416a-a3cc-81d567aba65c" (UID: "cd03090e-9416-416a-a3cc-81d567aba65c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.800641 4975 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-util\") on node \"crc\" DevicePath \"\""
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.800671 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfwgr\" (UniqueName: \"kubernetes.io/projected/cd03090e-9416-416a-a3cc-81d567aba65c-kube-api-access-xfwgr\") on node \"crc\" DevicePath \"\""
Jan 22 10:49:35 crc kubenswrapper[4975]: I0122 10:49:35.800681 4975 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd03090e-9416-416a-a3cc-81d567aba65c-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:49:36 crc kubenswrapper[4975]: I0122 10:49:36.288020 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v" event={"ID":"cd03090e-9416-416a-a3cc-81d567aba65c","Type":"ContainerDied","Data":"0f0858bfc12d8f3999a43e58b4d4d6bd0dd45abff05c733beeeab5fbfd3e53fe"}
Jan 22 10:49:36 crc kubenswrapper[4975]: I0122 10:49:36.288090 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f0858bfc12d8f3999a43e58b4d4d6bd0dd45abff05c733beeeab5fbfd3e53fe"
Jan 22 10:49:36 crc kubenswrapper[4975]: I0122 10:49:36.288092 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.000496 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-d8nq2"]
Jan 22 10:49:39 crc kubenswrapper[4975]: E0122 10:49:39.000950 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd03090e-9416-416a-a3cc-81d567aba65c" containerName="pull"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.000965 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd03090e-9416-416a-a3cc-81d567aba65c" containerName="pull"
Jan 22 10:49:39 crc kubenswrapper[4975]: E0122 10:49:39.000978 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd03090e-9416-416a-a3cc-81d567aba65c" containerName="extract"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.000987 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd03090e-9416-416a-a3cc-81d567aba65c" containerName="extract"
Jan 22 10:49:39 crc kubenswrapper[4975]: E0122 10:49:39.001006 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd03090e-9416-416a-a3cc-81d567aba65c" containerName="util"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.001015 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd03090e-9416-416a-a3cc-81d567aba65c" containerName="util"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.001134 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd03090e-9416-416a-a3cc-81d567aba65c" containerName="extract"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.001579 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-d8nq2"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.003705 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.009007 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.011848 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-85qlv"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.021571 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-d8nq2"]
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.041877 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwt7q\" (UniqueName: \"kubernetes.io/projected/1d79f2dc-129c-4d57-94d5-3e2b73dad858-kube-api-access-zwt7q\") pod \"nmstate-operator-646758c888-d8nq2\" (UID: \"1d79f2dc-129c-4d57-94d5-3e2b73dad858\") " pod="openshift-nmstate/nmstate-operator-646758c888-d8nq2"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.143037 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwt7q\" (UniqueName: \"kubernetes.io/projected/1d79f2dc-129c-4d57-94d5-3e2b73dad858-kube-api-access-zwt7q\") pod \"nmstate-operator-646758c888-d8nq2\" (UID: \"1d79f2dc-129c-4d57-94d5-3e2b73dad858\") " pod="openshift-nmstate/nmstate-operator-646758c888-d8nq2"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.158999 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwt7q\" (UniqueName: \"kubernetes.io/projected/1d79f2dc-129c-4d57-94d5-3e2b73dad858-kube-api-access-zwt7q\") pod \"nmstate-operator-646758c888-d8nq2\" (UID: \"1d79f2dc-129c-4d57-94d5-3e2b73dad858\") " pod="openshift-nmstate/nmstate-operator-646758c888-d8nq2"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.332173 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-d8nq2"
Jan 22 10:49:39 crc kubenswrapper[4975]: I0122 10:49:39.554313 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-d8nq2"]
Jan 22 10:49:39 crc kubenswrapper[4975]: W0122 10:49:39.555746 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d79f2dc_129c_4d57_94d5_3e2b73dad858.slice/crio-50995ceec48e3fe14a31e87af1ff6f3ca8fcccbb984b39c7c07e94cd16bfa341 WatchSource:0}: Error finding container 50995ceec48e3fe14a31e87af1ff6f3ca8fcccbb984b39c7c07e94cd16bfa341: Status 404 returned error can't find the container with id 50995ceec48e3fe14a31e87af1ff6f3ca8fcccbb984b39c7c07e94cd16bfa341
Jan 22 10:49:40 crc kubenswrapper[4975]: I0122 10:49:40.314986 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-d8nq2" event={"ID":"1d79f2dc-129c-4d57-94d5-3e2b73dad858","Type":"ContainerStarted","Data":"50995ceec48e3fe14a31e87af1ff6f3ca8fcccbb984b39c7c07e94cd16bfa341"}
Jan 22 10:49:41 crc kubenswrapper[4975]: I0122 10:49:41.813374 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:41 crc kubenswrapper[4975]: I0122 10:49:41.813728 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:41 crc kubenswrapper[4975]: I0122 10:49:41.887217 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:42 crc kubenswrapper[4975]: I0122 10:49:42.375417 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:43 crc kubenswrapper[4975]: I0122 10:49:43.655507 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zsd96"]
Jan 22 10:49:44 crc kubenswrapper[4975]: I0122 10:49:44.345365 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zsd96" podUID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerName="registry-server" containerID="cri-o://94b6a88cdce35836736d403d107e62edd9420e3f52508f92ae3c7412acb0cbbb" gracePeriod=2
Jan 22 10:49:46 crc kubenswrapper[4975]: I0122 10:49:46.359897 4975 generic.go:334] "Generic (PLEG): container finished" podID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerID="94b6a88cdce35836736d403d107e62edd9420e3f52508f92ae3c7412acb0cbbb" exitCode=0
Jan 22 10:49:46 crc kubenswrapper[4975]: I0122 10:49:46.359968 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsd96" event={"ID":"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402","Type":"ContainerDied","Data":"94b6a88cdce35836736d403d107e62edd9420e3f52508f92ae3c7412acb0cbbb"}
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.533230 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.650557 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-catalog-content\") pod \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") "
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.650700 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-utilities\") pod \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") "
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.650746 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2n69\" (UniqueName: \"kubernetes.io/projected/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-kube-api-access-k2n69\") pod \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\" (UID: \"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402\") "
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.652014 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-utilities" (OuterVolumeSpecName: "utilities") pod "5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" (UID: "5d02bf6a-69fb-4086-be1f-6c4d8a4b9402"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.659823 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-kube-api-access-k2n69" (OuterVolumeSpecName: "kube-api-access-k2n69") pod "5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" (UID: "5d02bf6a-69fb-4086-be1f-6c4d8a4b9402"). InnerVolumeSpecName "kube-api-access-k2n69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.752254 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.752299 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2n69\" (UniqueName: \"kubernetes.io/projected/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-kube-api-access-k2n69\") on node \"crc\" DevicePath \"\""
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.871998 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" (UID: "5d02bf6a-69fb-4086-be1f-6c4d8a4b9402"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:49:47 crc kubenswrapper[4975]: I0122 10:49:47.955324 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 10:49:48 crc kubenswrapper[4975]: I0122 10:49:48.388524 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsd96" event={"ID":"5d02bf6a-69fb-4086-be1f-6c4d8a4b9402","Type":"ContainerDied","Data":"cae1cde3eb6629ce5abd1aacbbf9f62751b88443fd9847c0f8dec22b1effc916"}
Jan 22 10:49:48 crc kubenswrapper[4975]: I0122 10:49:48.388618 4975 scope.go:117] "RemoveContainer" containerID="94b6a88cdce35836736d403d107e62edd9420e3f52508f92ae3c7412acb0cbbb"
Jan 22 10:49:48 crc kubenswrapper[4975]: I0122 10:49:48.388660 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zsd96"
Jan 22 10:49:48 crc kubenswrapper[4975]: I0122 10:49:48.415980 4975 scope.go:117] "RemoveContainer" containerID="c28c9936a1e5e990738e25c00c2c1a3cd4a6e380b87b8453bcc8f97430b016d7"
Jan 22 10:49:48 crc kubenswrapper[4975]: I0122 10:49:48.437967 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zsd96"]
Jan 22 10:49:48 crc kubenswrapper[4975]: I0122 10:49:48.446048 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zsd96"]
Jan 22 10:49:48 crc kubenswrapper[4975]: I0122 10:49:48.458515 4975 scope.go:117] "RemoveContainer" containerID="15efc7ffc862b71cd98f11b78e5197c440fd09740290a3b8386ffe01cbac0803"
Jan 22 10:49:49 crc kubenswrapper[4975]: I0122 10:49:49.766874 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" path="/var/lib/kubelet/pods/5d02bf6a-69fb-4086-be1f-6c4d8a4b9402/volumes"
Jan 22 10:49:54 crc kubenswrapper[4975]: I0122 10:49:54.433672 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-d8nq2" event={"ID":"1d79f2dc-129c-4d57-94d5-3e2b73dad858","Type":"ContainerStarted","Data":"27a2e52f389c3dde171ee6e51642725adacd4dfe3274b89d13f02777e84c32df"}
Jan 22 10:49:54 crc kubenswrapper[4975]: I0122 10:49:54.467536 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-d8nq2" podStartSLOduration=2.062918555 podStartE2EDuration="16.467508117s" podCreationTimestamp="2026-01-22 10:49:38 +0000 UTC" firstStartedPulling="2026-01-22 10:49:39.558114468 +0000 UTC m=+772.216150588" lastFinishedPulling="2026-01-22 10:49:53.962704 +0000 UTC m=+786.620740150" observedRunningTime="2026-01-22 10:49:54.456183835 +0000 UTC m=+787.114219985" watchObservedRunningTime="2026-01-22 10:49:54.467508117 +0000 UTC m=+787.125544267"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.565363 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hqp5c"]
Jan 22 10:49:55 crc kubenswrapper[4975]: E0122 10:49:55.565897 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerName="extract-content"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.565912 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerName="extract-content"
Jan 22 10:49:55 crc kubenswrapper[4975]: E0122 10:49:55.565924 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerName="registry-server"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.565933 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerName="registry-server"
Jan 22 10:49:55 crc kubenswrapper[4975]: E0122 10:49:55.565949 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerName="extract-utilities"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.565959 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerName="extract-utilities"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.566089 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d02bf6a-69fb-4086-be1f-6c4d8a4b9402" containerName="registry-server"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.566765 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hqp5c"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.569236 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-sns5p"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.592182 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hqp5c"]
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.602418 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"]
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.603590 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.605817 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.635619 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"]
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.646714 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mjdwp"]
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.647303 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.664820 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwml2\" (UniqueName: \"kubernetes.io/projected/7bcd90cd-87a3-4def-bff8-b31659a68fd2-kube-api-access-bwml2\") pod \"nmstate-metrics-54757c584b-hqp5c\" (UID: \"7bcd90cd-87a3-4def-bff8-b31659a68fd2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hqp5c"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.729003 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"]
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.730049 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.734933 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.735235 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.736051 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"]
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.739057 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-g9mhs"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.768218 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7th\" (UniqueName: \"kubernetes.io/projected/d10d736d-0f74-4f33-bcbc-31d9a55f3370-kube-api-access-7z7th\") pod \"nmstate-webhook-8474b5b9d8-r9mqr\" (UID: \"d10d736d-0f74-4f33-bcbc-31d9a55f3370\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.768455 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0f5b8079-630d-42c9-be23-abb29c55fcc8-nmstate-lock\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.768514 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d10d736d-0f74-4f33-bcbc-31d9a55f3370-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-r9mqr\" (UID: \"d10d736d-0f74-4f33-bcbc-31d9a55f3370\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.768539 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0f5b8079-630d-42c9-be23-abb29c55fcc8-ovs-socket\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.768574 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fgqv\" (UniqueName: \"kubernetes.io/projected/0f5b8079-630d-42c9-be23-abb29c55fcc8-kube-api-access-4fgqv\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.768659 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0f5b8079-630d-42c9-be23-abb29c55fcc8-dbus-socket\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.768761 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwml2\" (UniqueName: \"kubernetes.io/projected/7bcd90cd-87a3-4def-bff8-b31659a68fd2-kube-api-access-bwml2\") pod \"nmstate-metrics-54757c584b-hqp5c\" (UID: \"7bcd90cd-87a3-4def-bff8-b31659a68fd2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hqp5c"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.797746 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwml2\" (UniqueName: \"kubernetes.io/projected/7bcd90cd-87a3-4def-bff8-b31659a68fd2-kube-api-access-bwml2\") pod \"nmstate-metrics-54757c584b-hqp5c\" (UID: \"7bcd90cd-87a3-4def-bff8-b31659a68fd2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hqp5c"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.870761 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7th\" (UniqueName: \"kubernetes.io/projected/d10d736d-0f74-4f33-bcbc-31d9a55f3370-kube-api-access-7z7th\") pod \"nmstate-webhook-8474b5b9d8-r9mqr\" (UID: \"d10d736d-0f74-4f33-bcbc-31d9a55f3370\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.870906 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0f5b8079-630d-42c9-be23-abb29c55fcc8-nmstate-lock\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.870955 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f35ea3f-ed83-4201-a249-fc379ee0951c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7z2df\" (UID: \"8f35ea3f-ed83-4201-a249-fc379ee0951c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.870994 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d10d736d-0f74-4f33-bcbc-31d9a55f3370-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-r9mqr\" (UID: \"d10d736d-0f74-4f33-bcbc-31d9a55f3370\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.871024 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0f5b8079-630d-42c9-be23-abb29c55fcc8-ovs-socket\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.871058 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6b99\" (UniqueName: \"kubernetes.io/projected/8f35ea3f-ed83-4201-a249-fc379ee0951c-kube-api-access-r6b99\") pod \"nmstate-console-plugin-7754f76f8b-7z2df\" (UID: \"8f35ea3f-ed83-4201-a249-fc379ee0951c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.871071 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0f5b8079-630d-42c9-be23-abb29c55fcc8-nmstate-lock\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.871093 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fgqv\" (UniqueName: \"kubernetes.io/projected/0f5b8079-630d-42c9-be23-abb29c55fcc8-kube-api-access-4fgqv\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.871186 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f35ea3f-ed83-4201-a249-fc379ee0951c-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7z2df\" (UID: \"8f35ea3f-ed83-4201-a249-fc379ee0951c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.871368 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0f5b8079-630d-42c9-be23-abb29c55fcc8-dbus-socket\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.871567 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0f5b8079-630d-42c9-be23-abb29c55fcc8-ovs-socket\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.871777 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0f5b8079-630d-42c9-be23-abb29c55fcc8-dbus-socket\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.874793 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d10d736d-0f74-4f33-bcbc-31d9a55f3370-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-r9mqr\" (UID: \"d10d736d-0f74-4f33-bcbc-31d9a55f3370\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.888945 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7th\" (UniqueName: \"kubernetes.io/projected/d10d736d-0f74-4f33-bcbc-31d9a55f3370-kube-api-access-7z7th\") pod \"nmstate-webhook-8474b5b9d8-r9mqr\" (UID: \"d10d736d-0f74-4f33-bcbc-31d9a55f3370\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.892908 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hqp5c"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.898586 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fgqv\" (UniqueName: \"kubernetes.io/projected/0f5b8079-630d-42c9-be23-abb29c55fcc8-kube-api-access-4fgqv\") pod \"nmstate-handler-mjdwp\" (UID: \"0f5b8079-630d-42c9-be23-abb29c55fcc8\") " pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.924544 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-786cfc9d4b-6fj45"]
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.926260 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.932629 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.938216 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-786cfc9d4b-6fj45"]
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.967845 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mjdwp"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.975513 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f35ea3f-ed83-4201-a249-fc379ee0951c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7z2df\" (UID: \"8f35ea3f-ed83-4201-a249-fc379ee0951c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.976256 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6b99\" (UniqueName: \"kubernetes.io/projected/8f35ea3f-ed83-4201-a249-fc379ee0951c-kube-api-access-r6b99\") pod \"nmstate-console-plugin-7754f76f8b-7z2df\" (UID: \"8f35ea3f-ed83-4201-a249-fc379ee0951c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.976293 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f35ea3f-ed83-4201-a249-fc379ee0951c-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7z2df\" (UID: \"8f35ea3f-ed83-4201-a249-fc379ee0951c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.977248 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f35ea3f-ed83-4201-a249-fc379ee0951c-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7z2df\" (UID: \"8f35ea3f-ed83-4201-a249-fc379ee0951c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.979994 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f35ea3f-ed83-4201-a249-fc379ee0951c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7z2df\" (UID: \"8f35ea3f-ed83-4201-a249-fc379ee0951c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:55 crc kubenswrapper[4975]: W0122 10:49:55.987829 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f5b8079_630d_42c9_be23_abb29c55fcc8.slice/crio-3c4dd27437a8e6098a9fca85dc69c4dc14bbeb6e1a37f69b3fcc4e2296d80e3b WatchSource:0}: Error finding container 3c4dd27437a8e6098a9fca85dc69c4dc14bbeb6e1a37f69b3fcc4e2296d80e3b: Status 404 returned error can't find the container with id 3c4dd27437a8e6098a9fca85dc69c4dc14bbeb6e1a37f69b3fcc4e2296d80e3b
Jan 22 10:49:55 crc kubenswrapper[4975]: I0122 10:49:55.993835 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6b99\" (UniqueName: \"kubernetes.io/projected/8f35ea3f-ed83-4201-a249-fc379ee0951c-kube-api-access-r6b99\") pod \"nmstate-console-plugin-7754f76f8b-7z2df\" (UID: \"8f35ea3f-ed83-4201-a249-fc379ee0951c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.046829 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.077865 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmxfl\" (UniqueName: \"kubernetes.io/projected/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-kube-api-access-hmxfl\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.077920 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-console-oauth-config\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.077952 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-trusted-ca-bundle\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.077974 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-service-ca\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.078192 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-console-config\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.078216 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-oauth-serving-cert\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.078235 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-console-serving-cert\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.141045 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hqp5c"]
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.179073 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmxfl\" (UniqueName: \"kubernetes.io/projected/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-kube-api-access-hmxfl\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.179133 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-console-oauth-config\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.179187 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-trusted-ca-bundle\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.179209 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-service-ca\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.179251 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-console-config\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.179279 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-oauth-serving-cert\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.179306 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-console-serving-cert\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.180133 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-service-ca\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.180428 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-console-config\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45"
Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.182464 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-oauth-serving-cert\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.182929 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-trusted-ca-bundle\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.187600 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-console-serving-cert\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.189677 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-console-oauth-config\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.192366 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr"] Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.196166 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmxfl\" (UniqueName: \"kubernetes.io/projected/50192661-89ab-430c-9c7f-eb7ff6fc3ac1-kube-api-access-hmxfl\") pod \"console-786cfc9d4b-6fj45\" (UID: \"50192661-89ab-430c-9c7f-eb7ff6fc3ac1\") " pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:49:56 crc kubenswrapper[4975]: W0122 10:49:56.196533 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd10d736d_0f74_4f33_bcbc_31d9a55f3370.slice/crio-514bc63ffb3be56a5a869417b0dea19b717a2c1aa75bb9f6ad486aa3dbf42128 WatchSource:0}: Error finding container 514bc63ffb3be56a5a869417b0dea19b717a2c1aa75bb9f6ad486aa3dbf42128: Status 404 returned error can't find the container with id 514bc63ffb3be56a5a869417b0dea19b717a2c1aa75bb9f6ad486aa3dbf42128 Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.240033 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df"] Jan 22 10:49:56 crc kubenswrapper[4975]: W0122 10:49:56.242897 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f35ea3f_ed83_4201_a249_fc379ee0951c.slice/crio-96d786ac316f6df69958397f762865e7630cfcf9e14cafb72fb4db078f838d72 WatchSource:0}: Error finding container 96d786ac316f6df69958397f762865e7630cfcf9e14cafb72fb4db078f838d72: Status 404 returned error can't find the container with id 96d786ac316f6df69958397f762865e7630cfcf9e14cafb72fb4db078f838d72 Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.277957 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.448657 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr" event={"ID":"d10d736d-0f74-4f33-bcbc-31d9a55f3370","Type":"ContainerStarted","Data":"514bc63ffb3be56a5a869417b0dea19b717a2c1aa75bb9f6ad486aa3dbf42128"} Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.451032 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hqp5c" event={"ID":"7bcd90cd-87a3-4def-bff8-b31659a68fd2","Type":"ContainerStarted","Data":"d57e52c898cb0b040cd90b066fc59d613da72af06b85bc8c014d072fa8a12242"} Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.453642 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df" event={"ID":"8f35ea3f-ed83-4201-a249-fc379ee0951c","Type":"ContainerStarted","Data":"96d786ac316f6df69958397f762865e7630cfcf9e14cafb72fb4db078f838d72"} Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.455205 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mjdwp" event={"ID":"0f5b8079-630d-42c9-be23-abb29c55fcc8","Type":"ContainerStarted","Data":"3c4dd27437a8e6098a9fca85dc69c4dc14bbeb6e1a37f69b3fcc4e2296d80e3b"} Jan 22 10:49:56 crc kubenswrapper[4975]: I0122 10:49:56.508388 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-786cfc9d4b-6fj45"] Jan 22 10:49:56 crc kubenswrapper[4975]: W0122 10:49:56.515197 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50192661_89ab_430c_9c7f_eb7ff6fc3ac1.slice/crio-45f3f8f15d757b667be436c4a6f7634c0ebbf72a9fe1160e67aac7b5ec553c4d WatchSource:0}: Error finding container 45f3f8f15d757b667be436c4a6f7634c0ebbf72a9fe1160e67aac7b5ec553c4d: Status 404 returned error can't find the container with id 45f3f8f15d757b667be436c4a6f7634c0ebbf72a9fe1160e67aac7b5ec553c4d Jan 22 10:49:57 crc kubenswrapper[4975]: I0122 10:49:57.436454 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:49:57 crc kubenswrapper[4975]: I0122 10:49:57.436764 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:49:57 crc kubenswrapper[4975]: I0122 10:49:57.436824 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:49:57 crc kubenswrapper[4975]: I0122 10:49:57.437629 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a53aa8903fe7a0da9b7319f66f37bade3977b1a8441ec861195439bca40661fa"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:49:57 crc kubenswrapper[4975]: I0122 10:49:57.437694 4975 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://a53aa8903fe7a0da9b7319f66f37bade3977b1a8441ec861195439bca40661fa" gracePeriod=600 Jan 22 10:49:57 crc kubenswrapper[4975]: I0122 10:49:57.462815 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-786cfc9d4b-6fj45" event={"ID":"50192661-89ab-430c-9c7f-eb7ff6fc3ac1","Type":"ContainerStarted","Data":"e046d497922b095d08aa1a7107100742d4f1f7481402fbbbcb17bf61722162b5"} Jan 22 10:49:57 crc kubenswrapper[4975]: I0122 10:49:57.462871 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-786cfc9d4b-6fj45" event={"ID":"50192661-89ab-430c-9c7f-eb7ff6fc3ac1","Type":"ContainerStarted","Data":"45f3f8f15d757b667be436c4a6f7634c0ebbf72a9fe1160e67aac7b5ec553c4d"} Jan 22 10:49:57 crc kubenswrapper[4975]: I0122 10:49:57.482735 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-786cfc9d4b-6fj45" podStartSLOduration=2.482717703 podStartE2EDuration="2.482717703s" podCreationTimestamp="2026-01-22 10:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:49:57.475647088 +0000 UTC m=+790.133683208" watchObservedRunningTime="2026-01-22 10:49:57.482717703 +0000 UTC m=+790.140753813" Jan 22 10:49:58 crc kubenswrapper[4975]: I0122 10:49:58.471960 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="a53aa8903fe7a0da9b7319f66f37bade3977b1a8441ec861195439bca40661fa" exitCode=0 Jan 22 10:49:58 crc kubenswrapper[4975]: I0122 10:49:58.472013 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"a53aa8903fe7a0da9b7319f66f37bade3977b1a8441ec861195439bca40661fa"} Jan 22 10:49:58 crc kubenswrapper[4975]: I0122 10:49:58.472719 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"16a708e48cbc5f721c35b9e8acfcd4898f3835379f880a9ee0b3d9dc8b03fa83"} Jan 22 10:49:58 crc kubenswrapper[4975]: I0122 10:49:58.472759 4975 scope.go:117] "RemoveContainer" containerID="acda3fcae4097cbb557b51de8e95309058b3079ea14639b6ff4a35c52220bbdc" Jan 22 10:49:59 crc kubenswrapper[4975]: I0122 10:49:59.479104 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df" event={"ID":"8f35ea3f-ed83-4201-a249-fc379ee0951c","Type":"ContainerStarted","Data":"d50ead3a172ae51b40988fab3df43fae674bdc24a3b401a1903c35083a2c8489"} Jan 22 10:49:59 crc kubenswrapper[4975]: I0122 10:49:59.480369 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mjdwp" event={"ID":"0f5b8079-630d-42c9-be23-abb29c55fcc8","Type":"ContainerStarted","Data":"9003bfdddd219b065638b13629dfbcfe2f07a921681ef5b9d145f36754bbdbac"} Jan 22 10:49:59 crc kubenswrapper[4975]: I0122 10:49:59.480844 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mjdwp" Jan 22 10:49:59 crc kubenswrapper[4975]: I0122 10:49:59.492162 4975 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr" event={"ID":"d10d736d-0f74-4f33-bcbc-31d9a55f3370","Type":"ContainerStarted","Data":"1e177c18bf3e7a30aca2f1c1a0d68e0c2bf903e17298287c38fa881f275d1fbb"} Jan 22 10:49:59 crc kubenswrapper[4975]: I0122 10:49:59.492280 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr" Jan 22 10:49:59 crc kubenswrapper[4975]: I0122 10:49:59.494246 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hqp5c" event={"ID":"7bcd90cd-87a3-4def-bff8-b31659a68fd2","Type":"ContainerStarted","Data":"692d0652acb6445dbe56ed2a275fdadd0458fd1d30f9bfe1f875d80d52a02c41"} Jan 22 10:49:59 crc kubenswrapper[4975]: I0122 10:49:59.527294 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr" podStartSLOduration=1.580774355 podStartE2EDuration="4.527276098s" podCreationTimestamp="2026-01-22 10:49:55 +0000 UTC" firstStartedPulling="2026-01-22 10:49:56.199013008 +0000 UTC m=+788.857049118" lastFinishedPulling="2026-01-22 10:49:59.145514721 +0000 UTC m=+791.803550861" observedRunningTime="2026-01-22 10:49:59.526769524 +0000 UTC m=+792.184805644" watchObservedRunningTime="2026-01-22 10:49:59.527276098 +0000 UTC m=+792.185312218" Jan 22 10:49:59 crc kubenswrapper[4975]: I0122 10:49:59.529423 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7z2df" podStartSLOduration=1.68001622 podStartE2EDuration="4.529414147s" podCreationTimestamp="2026-01-22 10:49:55 +0000 UTC" firstStartedPulling="2026-01-22 10:49:56.245072688 +0000 UTC m=+788.903108798" lastFinishedPulling="2026-01-22 10:49:59.094470605 +0000 UTC m=+791.752506725" observedRunningTime="2026-01-22 10:49:59.502794904 +0000 UTC m=+792.160831034" watchObservedRunningTime="2026-01-22 10:49:59.529414147 +0000 UTC m=+792.187450267" Jan 22 10:49:59 crc kubenswrapper[4975]: I0122 10:49:59.555946 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mjdwp" podStartSLOduration=1.431424952 podStartE2EDuration="4.555911037s" podCreationTimestamp="2026-01-22 10:49:55 +0000 UTC" firstStartedPulling="2026-01-22 10:49:55.990104254 +0000 UTC m=+788.648140364" lastFinishedPulling="2026-01-22 10:49:59.114590269 +0000 UTC m=+791.772626449" observedRunningTime="2026-01-22 10:49:59.549723437 +0000 UTC m=+792.207759577" watchObservedRunningTime="2026-01-22 10:49:59.555911037 +0000 UTC m=+792.213947157" Jan 22 10:50:02 crc kubenswrapper[4975]: I0122 10:50:02.518462 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hqp5c" event={"ID":"7bcd90cd-87a3-4def-bff8-b31659a68fd2","Type":"ContainerStarted","Data":"7b71eea1ca0bb3a442dd5e393fe1b373adafb9a8d477b87d5391b61de455ebd3"} Jan 22 10:50:02 crc kubenswrapper[4975]: I0122 10:50:02.543432 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-hqp5c" podStartSLOduration=2.13703016 podStartE2EDuration="7.54340205s" podCreationTimestamp="2026-01-22 10:49:55 +0000 UTC" firstStartedPulling="2026-01-22 10:49:56.157608837 +0000 UTC m=+788.815644947" lastFinishedPulling="2026-01-22 10:50:01.563980717 +0000 UTC m=+794.222016837" observedRunningTime="2026-01-22 10:50:02.539232054 +0000 UTC m=+795.197268204" 
watchObservedRunningTime="2026-01-22 10:50:02.54340205 +0000 UTC m=+795.201438200" Jan 22 10:50:06 crc kubenswrapper[4975]: I0122 10:50:06.010659 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mjdwp" Jan 22 10:50:06 crc kubenswrapper[4975]: I0122 10:50:06.279551 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:50:06 crc kubenswrapper[4975]: I0122 10:50:06.279944 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:50:06 crc kubenswrapper[4975]: I0122 10:50:06.287561 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:50:06 crc kubenswrapper[4975]: I0122 10:50:06.554549 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-786cfc9d4b-6fj45" Jan 22 10:50:06 crc kubenswrapper[4975]: I0122 10:50:06.645861 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7pt74"] Jan 22 10:50:15 crc kubenswrapper[4975]: I0122 10:50:15.941778 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-r9mqr" Jan 22 10:50:31 crc kubenswrapper[4975]: I0122 10:50:31.707514 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-7pt74" podUID="a92ef0de-4810-4e00-8f34-90fcb13c236c" containerName="console" containerID="cri-o://8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4" gracePeriod=15 Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.669347 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7pt74_a92ef0de-4810-4e00-8f34-90fcb13c236c/console/0.log" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.669728 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.761904 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7pt74_a92ef0de-4810-4e00-8f34-90fcb13c236c/console/0.log" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.761969 4975 generic.go:334] "Generic (PLEG): container finished" podID="a92ef0de-4810-4e00-8f34-90fcb13c236c" containerID="8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4" exitCode=2 Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.762005 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7pt74" event={"ID":"a92ef0de-4810-4e00-8f34-90fcb13c236c","Type":"ContainerDied","Data":"8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4"} Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.762043 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7pt74" event={"ID":"a92ef0de-4810-4e00-8f34-90fcb13c236c","Type":"ContainerDied","Data":"eada703cc76ff389d11b5c42bb7d2e10770e9c8cb38f1a2fc9bfd794522f2d47"} Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.762064 4975 scope.go:117] "RemoveContainer" containerID="8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.762074 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7pt74" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.773022 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dshnm\" (UniqueName: \"kubernetes.io/projected/a92ef0de-4810-4e00-8f34-90fcb13c236c-kube-api-access-dshnm\") pod \"a92ef0de-4810-4e00-8f34-90fcb13c236c\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.773157 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-oauth-serving-cert\") pod \"a92ef0de-4810-4e00-8f34-90fcb13c236c\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.773387 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-trusted-ca-bundle\") pod \"a92ef0de-4810-4e00-8f34-90fcb13c236c\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.773436 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-config\") pod \"a92ef0de-4810-4e00-8f34-90fcb13c236c\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.773481 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-oauth-config\") pod \"a92ef0de-4810-4e00-8f34-90fcb13c236c\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.773544 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-service-ca\") pod \"a92ef0de-4810-4e00-8f34-90fcb13c236c\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.773652 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-serving-cert\") pod \"a92ef0de-4810-4e00-8f34-90fcb13c236c\" (UID: \"a92ef0de-4810-4e00-8f34-90fcb13c236c\") " Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.774302 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-config" (OuterVolumeSpecName: "console-config") pod "a92ef0de-4810-4e00-8f34-90fcb13c236c" (UID: "a92ef0de-4810-4e00-8f34-90fcb13c236c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.774531 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a92ef0de-4810-4e00-8f34-90fcb13c236c" (UID: "a92ef0de-4810-4e00-8f34-90fcb13c236c"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.774740 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-service-ca" (OuterVolumeSpecName: "service-ca") pod "a92ef0de-4810-4e00-8f34-90fcb13c236c" (UID: "a92ef0de-4810-4e00-8f34-90fcb13c236c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.774838 4975 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.775037 4975 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.775255 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a92ef0de-4810-4e00-8f34-90fcb13c236c" (UID: "a92ef0de-4810-4e00-8f34-90fcb13c236c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.780572 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a92ef0de-4810-4e00-8f34-90fcb13c236c" (UID: "a92ef0de-4810-4e00-8f34-90fcb13c236c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.783029 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92ef0de-4810-4e00-8f34-90fcb13c236c-kube-api-access-dshnm" (OuterVolumeSpecName: "kube-api-access-dshnm") pod "a92ef0de-4810-4e00-8f34-90fcb13c236c" (UID: "a92ef0de-4810-4e00-8f34-90fcb13c236c"). InnerVolumeSpecName "kube-api-access-dshnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.787265 4975 scope.go:117] "RemoveContainer" containerID="8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4" Jan 22 10:50:32 crc kubenswrapper[4975]: E0122 10:50:32.787871 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4\": container with ID starting with 8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4 not found: ID does not exist" containerID="8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.787924 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4"} err="failed to get container status \"8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4\": rpc error: code = NotFound desc = could not find container \"8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4\": container with ID starting with 8edfc849e37973e4482b9e864c5a2d4f03bd01d14d9548f099a2fc0d88d062c4 not found: ID does not exist" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.788190 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a92ef0de-4810-4e00-8f34-90fcb13c236c" (UID: "a92ef0de-4810-4e00-8f34-90fcb13c236c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.876557 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dshnm\" (UniqueName: \"kubernetes.io/projected/a92ef0de-4810-4e00-8f34-90fcb13c236c-kube-api-access-dshnm\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.876622 4975 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.876648 4975 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.876673 4975 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a92ef0de-4810-4e00-8f34-90fcb13c236c-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:32 crc kubenswrapper[4975]: I0122 10:50:32.876698 4975 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a92ef0de-4810-4e00-8f34-90fcb13c236c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.096790 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7pt74"] Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.103305 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-7pt74"] Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.646612 4975 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2"] Jan 22 10:50:33 crc kubenswrapper[4975]: E0122 10:50:33.646943 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92ef0de-4810-4e00-8f34-90fcb13c236c" containerName="console" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.646962 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92ef0de-4810-4e00-8f34-90fcb13c236c" containerName="console" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.647158 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92ef0de-4810-4e00-8f34-90fcb13c236c" containerName="console" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.648417 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.651614 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.666477 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2"] Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.699661 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.699750 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.699849 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td4cs\" (UniqueName: \"kubernetes.io/projected/ff442b38-0291-4b8a-a51e-01cc0397ea47-kube-api-access-td4cs\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.769196 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92ef0de-4810-4e00-8f34-90fcb13c236c" path="/var/lib/kubelet/pods/a92ef0de-4810-4e00-8f34-90fcb13c236c/volumes" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.800547 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.800612 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-td4cs\" (UniqueName: \"kubernetes.io/projected/ff442b38-0291-4b8a-a51e-01cc0397ea47-kube-api-access-td4cs\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.801090 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.801120 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.801540 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:33 crc kubenswrapper[4975]: I0122 10:50:33.823562 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td4cs\" (UniqueName: \"kubernetes.io/projected/ff442b38-0291-4b8a-a51e-01cc0397ea47-kube-api-access-td4cs\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:34 crc kubenswrapper[4975]: I0122 10:50:34.006830 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:34 crc kubenswrapper[4975]: I0122 10:50:34.520576 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2"] Jan 22 10:50:34 crc kubenswrapper[4975]: I0122 10:50:34.780125 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" event={"ID":"ff442b38-0291-4b8a-a51e-01cc0397ea47","Type":"ContainerStarted","Data":"24843a7503589438ba7c3c3a6ab2cce986099adc33e9a5338dd95468810c7925"} Jan 22 10:50:35 crc kubenswrapper[4975]: I0122 10:50:35.791713 4975 generic.go:334] "Generic (PLEG): container finished" podID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerID="0cb57f43f6bc251da1d2cda8c13a5a98e5e4cc3dbbae7711381f908c313c9021" exitCode=0 Jan 22 10:50:35 crc kubenswrapper[4975]: I0122 10:50:35.791819 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" event={"ID":"ff442b38-0291-4b8a-a51e-01cc0397ea47","Type":"ContainerDied","Data":"0cb57f43f6bc251da1d2cda8c13a5a98e5e4cc3dbbae7711381f908c313c9021"} Jan 22 10:50:37 crc kubenswrapper[4975]: I0122 10:50:37.814408 4975 generic.go:334] "Generic (PLEG): container finished" podID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerID="1886e8afb4d7990a7b9e8f7837bbc9856276cf6a416569e86ed905b2f4b7046e" exitCode=0 Jan 22 10:50:37 crc kubenswrapper[4975]: I0122 10:50:37.814517 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" event={"ID":"ff442b38-0291-4b8a-a51e-01cc0397ea47","Type":"ContainerDied","Data":"1886e8afb4d7990a7b9e8f7837bbc9856276cf6a416569e86ed905b2f4b7046e"} Jan 22 10:50:38 crc kubenswrapper[4975]: I0122 10:50:38.825843 4975 generic.go:334] "Generic (PLEG): container finished" podID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerID="6cb69044b65d5be5d59b205b9b3a6e89017b90c3140b9c6bece78cf6e568be58" exitCode=0 Jan 22 10:50:38 crc kubenswrapper[4975]: I0122 10:50:38.825950 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" event={"ID":"ff442b38-0291-4b8a-a51e-01cc0397ea47","Type":"ContainerDied","Data":"6cb69044b65d5be5d59b205b9b3a6e89017b90c3140b9c6bece78cf6e568be58"} Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.135936 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.288198 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-util\") pod \"ff442b38-0291-4b8a-a51e-01cc0397ea47\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.288323 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-bundle\") pod \"ff442b38-0291-4b8a-a51e-01cc0397ea47\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.288387 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td4cs\" (UniqueName: \"kubernetes.io/projected/ff442b38-0291-4b8a-a51e-01cc0397ea47-kube-api-access-td4cs\") pod \"ff442b38-0291-4b8a-a51e-01cc0397ea47\" (UID: \"ff442b38-0291-4b8a-a51e-01cc0397ea47\") " Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.290083 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-bundle" (OuterVolumeSpecName: "bundle") pod "ff442b38-0291-4b8a-a51e-01cc0397ea47" (UID: "ff442b38-0291-4b8a-a51e-01cc0397ea47"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.297664 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff442b38-0291-4b8a-a51e-01cc0397ea47-kube-api-access-td4cs" (OuterVolumeSpecName: "kube-api-access-td4cs") pod "ff442b38-0291-4b8a-a51e-01cc0397ea47" (UID: "ff442b38-0291-4b8a-a51e-01cc0397ea47"). InnerVolumeSpecName "kube-api-access-td4cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.317191 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-util" (OuterVolumeSpecName: "util") pod "ff442b38-0291-4b8a-a51e-01cc0397ea47" (UID: "ff442b38-0291-4b8a-a51e-01cc0397ea47"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.389682 4975 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-util\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.389733 4975 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff442b38-0291-4b8a-a51e-01cc0397ea47-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.389753 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td4cs\" (UniqueName: \"kubernetes.io/projected/ff442b38-0291-4b8a-a51e-01cc0397ea47-kube-api-access-td4cs\") on node \"crc\" DevicePath \"\"" Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.844230 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" event={"ID":"ff442b38-0291-4b8a-a51e-01cc0397ea47","Type":"ContainerDied","Data":"24843a7503589438ba7c3c3a6ab2cce986099adc33e9a5338dd95468810c7925"} Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.844292 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24843a7503589438ba7c3c3a6ab2cce986099adc33e9a5338dd95468810c7925" Jan 22 10:50:40 crc kubenswrapper[4975]: I0122 10:50:40.844665 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.668644 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz"] Jan 22 10:50:48 crc kubenswrapper[4975]: E0122 10:50:48.669211 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerName="util" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.669222 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerName="util" Jan 22 10:50:48 crc kubenswrapper[4975]: E0122 10:50:48.669234 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerName="pull" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.669240 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerName="pull" Jan 22 10:50:48 crc kubenswrapper[4975]: E0122 10:50:48.669257 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerName="extract" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.669263 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerName="extract" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.669366 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff442b38-0291-4b8a-a51e-01cc0397ea47" containerName="extract" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.669697 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.671885 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.672676 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7slzv" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.672937 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.673149 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.675457 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.694356 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz"] Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.802526 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c3466c9-e4db-41b6-90c7-132cf6c0dae7-webhook-cert\") pod \"metallb-operator-controller-manager-5b68b5dc66-l54nz\" (UID: \"3c3466c9-e4db-41b6-90c7-132cf6c0dae7\") " pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.802576 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c3466c9-e4db-41b6-90c7-132cf6c0dae7-apiservice-cert\") pod \"metallb-operator-controller-manager-5b68b5dc66-l54nz\" (UID: \"3c3466c9-e4db-41b6-90c7-132cf6c0dae7\") " pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.802607 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmcrw\" (UniqueName: \"kubernetes.io/projected/3c3466c9-e4db-41b6-90c7-132cf6c0dae7-kube-api-access-nmcrw\") pod \"metallb-operator-controller-manager-5b68b5dc66-l54nz\" (UID: \"3c3466c9-e4db-41b6-90c7-132cf6c0dae7\") " pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.898375 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh"] Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.899140 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.900806 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.902207 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.902303 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6wcxq" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.903616 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c3466c9-e4db-41b6-90c7-132cf6c0dae7-webhook-cert\") pod \"metallb-operator-controller-manager-5b68b5dc66-l54nz\" (UID: \"3c3466c9-e4db-41b6-90c7-132cf6c0dae7\") " pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.903657 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c3466c9-e4db-41b6-90c7-132cf6c0dae7-apiservice-cert\") pod \"metallb-operator-controller-manager-5b68b5dc66-l54nz\" (UID: \"3c3466c9-e4db-41b6-90c7-132cf6c0dae7\") " pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.903692 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmcrw\" (UniqueName: \"kubernetes.io/projected/3c3466c9-e4db-41b6-90c7-132cf6c0dae7-kube-api-access-nmcrw\") pod \"metallb-operator-controller-manager-5b68b5dc66-l54nz\" (UID: \"3c3466c9-e4db-41b6-90c7-132cf6c0dae7\") " pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.922931 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c3466c9-e4db-41b6-90c7-132cf6c0dae7-apiservice-cert\") pod \"metallb-operator-controller-manager-5b68b5dc66-l54nz\" (UID: \"3c3466c9-e4db-41b6-90c7-132cf6c0dae7\") " pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.923001 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c3466c9-e4db-41b6-90c7-132cf6c0dae7-webhook-cert\") pod \"metallb-operator-controller-manager-5b68b5dc66-l54nz\" (UID: \"3c3466c9-e4db-41b6-90c7-132cf6c0dae7\") " pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.927683 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmcrw\" (UniqueName: \"kubernetes.io/projected/3c3466c9-e4db-41b6-90c7-132cf6c0dae7-kube-api-access-nmcrw\") pod \"metallb-operator-controller-manager-5b68b5dc66-l54nz\" (UID: \"3c3466c9-e4db-41b6-90c7-132cf6c0dae7\") " pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.947134 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh"] Jan 22 10:50:48 crc kubenswrapper[4975]: I0122 10:50:48.985845 4975 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.004273 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6b7738a-4a9e-4f09-a3a0-5a413eed76b9-apiservice-cert\") pod \"metallb-operator-webhook-server-5b6ddfffdc-46wrh\" (UID: \"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9\") " pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.004484 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sl6c\" (UniqueName: \"kubernetes.io/projected/c6b7738a-4a9e-4f09-a3a0-5a413eed76b9-kube-api-access-9sl6c\") pod \"metallb-operator-webhook-server-5b6ddfffdc-46wrh\" (UID: \"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9\") " pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.004529 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6b7738a-4a9e-4f09-a3a0-5a413eed76b9-webhook-cert\") pod \"metallb-operator-webhook-server-5b6ddfffdc-46wrh\" (UID: \"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9\") " pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.105405 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sl6c\" (UniqueName: \"kubernetes.io/projected/c6b7738a-4a9e-4f09-a3a0-5a413eed76b9-kube-api-access-9sl6c\") pod \"metallb-operator-webhook-server-5b6ddfffdc-46wrh\" (UID: \"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9\") " pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.105689 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6b7738a-4a9e-4f09-a3a0-5a413eed76b9-webhook-cert\") pod \"metallb-operator-webhook-server-5b6ddfffdc-46wrh\" (UID: \"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9\") " pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.105759 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6b7738a-4a9e-4f09-a3a0-5a413eed76b9-apiservice-cert\") pod \"metallb-operator-webhook-server-5b6ddfffdc-46wrh\" (UID: \"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9\") " pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.112018 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6b7738a-4a9e-4f09-a3a0-5a413eed76b9-webhook-cert\") pod \"metallb-operator-webhook-server-5b6ddfffdc-46wrh\" (UID: \"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9\") " pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.124094 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6b7738a-4a9e-4f09-a3a0-5a413eed76b9-apiservice-cert\") pod \"metallb-operator-webhook-server-5b6ddfffdc-46wrh\" (UID: 
\"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9\") " pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.140189 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sl6c\" (UniqueName: \"kubernetes.io/projected/c6b7738a-4a9e-4f09-a3a0-5a413eed76b9-kube-api-access-9sl6c\") pod \"metallb-operator-webhook-server-5b6ddfffdc-46wrh\" (UID: \"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9\") " pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.254725 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.460418 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh"] Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.475933 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz"] Jan 22 10:50:49 crc kubenswrapper[4975]: W0122 10:50:49.486621 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c3466c9_e4db_41b6_90c7_132cf6c0dae7.slice/crio-1ecc2cf266f9c7a223a33bce93b9082aafadab7fbab43e987d7487d8a8a9378e WatchSource:0}: Error finding container 1ecc2cf266f9c7a223a33bce93b9082aafadab7fbab43e987d7487d8a8a9378e: Status 404 returned error can't find the container with id 1ecc2cf266f9c7a223a33bce93b9082aafadab7fbab43e987d7487d8a8a9378e Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.901022 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" event={"ID":"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9","Type":"ContainerStarted","Data":"8e0bd97b9d13703453d46e93594a9fb9661aed3a6b985ea947a7d5e47e326cfc"} Jan 22 10:50:49 crc kubenswrapper[4975]: I0122 10:50:49.902221 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" event={"ID":"3c3466c9-e4db-41b6-90c7-132cf6c0dae7","Type":"ContainerStarted","Data":"1ecc2cf266f9c7a223a33bce93b9082aafadab7fbab43e987d7487d8a8a9378e"} Jan 22 10:50:53 crc kubenswrapper[4975]: I0122 10:50:53.931427 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" event={"ID":"3c3466c9-e4db-41b6-90c7-132cf6c0dae7","Type":"ContainerStarted","Data":"f6d89f12354615df74cf9f6e6d40162a488b1c922d700350f6b09178cf610f79"} Jan 22 10:50:53 crc kubenswrapper[4975]: I0122 10:50:53.931994 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:50:53 crc kubenswrapper[4975]: I0122 10:50:53.961447 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" podStartSLOduration=2.579594045 podStartE2EDuration="5.961424381s" podCreationTimestamp="2026-01-22 10:50:48 +0000 UTC" firstStartedPulling="2026-01-22 10:50:49.489248058 +0000 UTC m=+842.147284168" lastFinishedPulling="2026-01-22 10:50:52.871078394 +0000 UTC m=+845.529114504" observedRunningTime="2026-01-22 10:50:53.954399548 +0000 UTC m=+846.612435688" watchObservedRunningTime="2026-01-22 10:50:53.961424381 +0000 UTC 
m=+846.619460491" Jan 22 10:50:56 crc kubenswrapper[4975]: I0122 10:50:56.947712 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" event={"ID":"c6b7738a-4a9e-4f09-a3a0-5a413eed76b9","Type":"ContainerStarted","Data":"891cdd7641c013e9fe00d1e23a50fe4f70ec4825f15a29c926089c44e4153c3e"} Jan 22 10:50:56 crc kubenswrapper[4975]: I0122 10:50:56.948044 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:50:56 crc kubenswrapper[4975]: I0122 10:50:56.976664 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" podStartSLOduration=2.132367101 podStartE2EDuration="8.976649716s" podCreationTimestamp="2026-01-22 10:50:48 +0000 UTC" firstStartedPulling="2026-01-22 10:50:49.479088798 +0000 UTC m=+842.137124928" lastFinishedPulling="2026-01-22 10:50:56.323371423 +0000 UTC m=+848.981407543" observedRunningTime="2026-01-22 10:50:56.9740144 +0000 UTC m=+849.632050510" watchObservedRunningTime="2026-01-22 10:50:56.976649716 +0000 UTC m=+849.634685826" Jan 22 10:51:09 crc kubenswrapper[4975]: I0122 10:51:09.260224 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5b6ddfffdc-46wrh" Jan 22 10:51:28 crc kubenswrapper[4975]: I0122 10:51:28.989798 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b68b5dc66-l54nz" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.809002 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-vcvnb"] Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.812491 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.818883 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.819114 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.819174 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nz2bd" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.822633 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg"] Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.823224 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.828291 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.848159 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg"] Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.897844 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2knnf"] Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.898669 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2knnf" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.901654 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.901861 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.901978 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-n48xb" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.902776 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.909562 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-d2mbh"] Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.910623 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.916121 4975 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.919754 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-d2mbh"] Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.966041 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-frr-conf\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.966094 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-frr-startup\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.966458 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-frr-sockets\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.966540 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-reloader\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.966567 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0d54a39-8b95-459f-a047-244ed0e3a156-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6tqxg\" (UID: \"b0d54a39-8b95-459f-a047-244ed0e3a156\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.966645 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njmpd\" (UniqueName: 
\"kubernetes.io/projected/b0d54a39-8b95-459f-a047-244ed0e3a156-kube-api-access-njmpd\") pod \"frr-k8s-webhook-server-7df86c4f6c-6tqxg\" (UID: \"b0d54a39-8b95-459f-a047-244ed0e3a156\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.966697 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-metrics-certs\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.966716 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc5kr\" (UniqueName: \"kubernetes.io/projected/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-kube-api-access-sc5kr\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:29 crc kubenswrapper[4975]: I0122 10:51:29.966762 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-metrics\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.067701 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-memberlist\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.067754 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1b770e2d-a830-4d27-8567-52246e202aa3-metallb-excludel2\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.067785 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-metrics\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.067855 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88d55612-c72e-4893-8ef2-a524fa613e7a-cert\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.067883 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-frr-conf\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.067907 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-frr-startup\") pod \"frr-k8s-vcvnb\" (UID: 
\"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.067928 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88d55612-c72e-4893-8ef2-a524fa613e7a-metrics-certs\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.067957 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884hj\" (UniqueName: \"kubernetes.io/projected/88d55612-c72e-4893-8ef2-a524fa613e7a-kube-api-access-884hj\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068008 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-frr-sockets\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068031 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h2x7\" (UniqueName: \"kubernetes.io/projected/1b770e2d-a830-4d27-8567-52246e202aa3-kube-api-access-6h2x7\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068065 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-reloader\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068086 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0d54a39-8b95-459f-a047-244ed0e3a156-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6tqxg\" (UID: \"b0d54a39-8b95-459f-a047-244ed0e3a156\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068118 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-metrics-certs\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068147 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njmpd\" (UniqueName: \"kubernetes.io/projected/b0d54a39-8b95-459f-a047-244ed0e3a156-kube-api-access-njmpd\") pod \"frr-k8s-webhook-server-7df86c4f6c-6tqxg\" (UID: \"b0d54a39-8b95-459f-a047-244ed0e3a156\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068170 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc5kr\" (UniqueName: \"kubernetes.io/projected/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-kube-api-access-sc5kr\") pod \"frr-k8s-vcvnb\" (UID: 
\"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068188 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-metrics-certs\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068186 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-metrics\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068555 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-frr-sockets\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068817 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-frr-conf\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.068844 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-frr-startup\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.069359 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-reloader\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.075939 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-metrics-certs\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.086885 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njmpd\" (UniqueName: \"kubernetes.io/projected/b0d54a39-8b95-459f-a047-244ed0e3a156-kube-api-access-njmpd\") pod \"frr-k8s-webhook-server-7df86c4f6c-6tqxg\" (UID: \"b0d54a39-8b95-459f-a047-244ed0e3a156\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.089640 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc5kr\" (UniqueName: \"kubernetes.io/projected/f8d84cf3-8bf0-47bb-ae46-eedcc76c4892-kube-api-access-sc5kr\") pod \"frr-k8s-vcvnb\" (UID: \"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892\") " pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.090217 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0d54a39-8b95-459f-a047-244ed0e3a156-cert\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-6tqxg\" (UID: \"b0d54a39-8b95-459f-a047-244ed0e3a156\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.135829 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.150063 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.168846 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88d55612-c72e-4893-8ef2-a524fa613e7a-cert\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.168890 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88d55612-c72e-4893-8ef2-a524fa613e7a-metrics-certs\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.168917 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-884hj\" (UniqueName: \"kubernetes.io/projected/88d55612-c72e-4893-8ef2-a524fa613e7a-kube-api-access-884hj\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.168960 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h2x7\" (UniqueName: \"kubernetes.io/projected/1b770e2d-a830-4d27-8567-52246e202aa3-kube-api-access-6h2x7\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.168996 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-metrics-certs\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.169020 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-memberlist\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: E0122 10:51:30.169031 4975 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 22 10:51:30 crc kubenswrapper[4975]: E0122 10:51:30.169103 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88d55612-c72e-4893-8ef2-a524fa613e7a-metrics-certs podName:88d55612-c72e-4893-8ef2-a524fa613e7a nodeName:}" failed. No retries permitted until 2026-01-22 10:51:30.669083807 +0000 UTC m=+883.327119917 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88d55612-c72e-4893-8ef2-a524fa613e7a-metrics-certs") pod "controller-6968d8fdc4-d2mbh" (UID: "88d55612-c72e-4893-8ef2-a524fa613e7a") : secret "controller-certs-secret" not found Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.169036 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1b770e2d-a830-4d27-8567-52246e202aa3-metallb-excludel2\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: E0122 10:51:30.169783 4975 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 10:51:30 crc kubenswrapper[4975]: E0122 10:51:30.169872 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-memberlist podName:1b770e2d-a830-4d27-8567-52246e202aa3 nodeName:}" failed. No retries permitted until 2026-01-22 10:51:30.669850556 +0000 UTC m=+883.327886766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-memberlist") pod "speaker-2knnf" (UID: "1b770e2d-a830-4d27-8567-52246e202aa3") : secret "metallb-memberlist" not found Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.169920 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1b770e2d-a830-4d27-8567-52246e202aa3-metallb-excludel2\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.172147 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-metrics-certs\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.172365 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88d55612-c72e-4893-8ef2-a524fa613e7a-cert\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.202323 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-884hj\" (UniqueName: \"kubernetes.io/projected/88d55612-c72e-4893-8ef2-a524fa613e7a-kube-api-access-884hj\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.202778 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h2x7\" (UniqueName: \"kubernetes.io/projected/1b770e2d-a830-4d27-8567-52246e202aa3-kube-api-access-6h2x7\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.594258 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg"] Jan 22 10:51:30 crc kubenswrapper[4975]: W0122 10:51:30.601891 4975 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0d54a39_8b95_459f_a047_244ed0e3a156.slice/crio-55cc09d460ea37606b5c0f2cb6b1db4d4244cad991472f85bf5e06b85d29f22f WatchSource:0}: Error finding container 55cc09d460ea37606b5c0f2cb6b1db4d4244cad991472f85bf5e06b85d29f22f: Status 404 returned error can't find the container with id 55cc09d460ea37606b5c0f2cb6b1db4d4244cad991472f85bf5e06b85d29f22f Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.679158 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88d55612-c72e-4893-8ef2-a524fa613e7a-metrics-certs\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.679392 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-memberlist\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:30 crc kubenswrapper[4975]: E0122 10:51:30.679600 4975 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 10:51:30 crc kubenswrapper[4975]: E0122 10:51:30.679705 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-memberlist podName:1b770e2d-a830-4d27-8567-52246e202aa3 nodeName:}" failed. No retries permitted until 2026-01-22 10:51:31.679679108 +0000 UTC m=+884.337715258 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-memberlist") pod "speaker-2knnf" (UID: "1b770e2d-a830-4d27-8567-52246e202aa3") : secret "metallb-memberlist" not found Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.687676 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88d55612-c72e-4893-8ef2-a524fa613e7a-metrics-certs\") pod \"controller-6968d8fdc4-d2mbh\" (UID: \"88d55612-c72e-4893-8ef2-a524fa613e7a\") " pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:30 crc kubenswrapper[4975]: I0122 10:51:30.822887 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:31 crc kubenswrapper[4975]: I0122 10:51:31.056513 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-d2mbh"] Jan 22 10:51:31 crc kubenswrapper[4975]: W0122 10:51:31.064752 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88d55612_c72e_4893_8ef2_a524fa613e7a.slice/crio-db769b918c2d272c3fd048be1eee7a203f143601b15b61b4792ef6035e434f25 WatchSource:0}: Error finding container db769b918c2d272c3fd048be1eee7a203f143601b15b61b4792ef6035e434f25: Status 404 returned error can't find the container with id db769b918c2d272c3fd048be1eee7a203f143601b15b61b4792ef6035e434f25 Jan 22 10:51:31 crc kubenswrapper[4975]: I0122 10:51:31.184052 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerStarted","Data":"8cc38c14ca43e296543a69c219701e8556dfc2d576601e70cfe8d36024447c23"} Jan 22 10:51:31 crc kubenswrapper[4975]: I0122 10:51:31.184878 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" event={"ID":"b0d54a39-8b95-459f-a047-244ed0e3a156","Type":"ContainerStarted","Data":"55cc09d460ea37606b5c0f2cb6b1db4d4244cad991472f85bf5e06b85d29f22f"} Jan 22 10:51:31 crc kubenswrapper[4975]: I0122 10:51:31.185668 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-d2mbh" event={"ID":"88d55612-c72e-4893-8ef2-a524fa613e7a","Type":"ContainerStarted","Data":"db769b918c2d272c3fd048be1eee7a203f143601b15b61b4792ef6035e434f25"} Jan 22 10:51:31 crc kubenswrapper[4975]: I0122 10:51:31.703966 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-memberlist\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:31 crc kubenswrapper[4975]: I0122 10:51:31.709359 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b770e2d-a830-4d27-8567-52246e202aa3-memberlist\") pod \"speaker-2knnf\" (UID: \"1b770e2d-a830-4d27-8567-52246e202aa3\") " pod="metallb-system/speaker-2knnf" Jan 22 10:51:31 crc kubenswrapper[4975]: I0122 10:51:31.712006 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2knnf" Jan 22 10:51:32 crc kubenswrapper[4975]: I0122 10:51:32.208161 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-d2mbh" event={"ID":"88d55612-c72e-4893-8ef2-a524fa613e7a","Type":"ContainerStarted","Data":"7860f53d86d153687a6b41951d5d67eb9416623f2854572241c169e181a350ee"} Jan 22 10:51:32 crc kubenswrapper[4975]: I0122 10:51:32.208202 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-d2mbh" event={"ID":"88d55612-c72e-4893-8ef2-a524fa613e7a","Type":"ContainerStarted","Data":"f2974a8f3afd72b80e052653831cf250166474b5c1eaad08e086165647dcaa34"} Jan 22 10:51:32 crc kubenswrapper[4975]: I0122 10:51:32.208270 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:32 crc kubenswrapper[4975]: I0122 10:51:32.213617 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2knnf" event={"ID":"1b770e2d-a830-4d27-8567-52246e202aa3","Type":"ContainerStarted","Data":"de8e7c25fff1daf87dc00ec7d298b08bef8c800970045070d5feaaeecf34a5a1"} Jan 22 10:51:32 crc kubenswrapper[4975]: I0122 10:51:32.213656 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2knnf" event={"ID":"1b770e2d-a830-4d27-8567-52246e202aa3","Type":"ContainerStarted","Data":"86dd7a974b2772f61755b3bb18050519b96fcf0669f6a6e1e24505d48708cfaf"} Jan 22 10:51:32 crc kubenswrapper[4975]: I0122 10:51:32.229440 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-d2mbh" podStartSLOduration=3.229419875 podStartE2EDuration="3.229419875s" podCreationTimestamp="2026-01-22 10:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:51:32.227453145 +0000 UTC m=+884.885489255" watchObservedRunningTime="2026-01-22 10:51:32.229419875 +0000 UTC m=+884.887455985" Jan 22 10:51:33 crc kubenswrapper[4975]: I0122 10:51:33.221344 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2knnf" event={"ID":"1b770e2d-a830-4d27-8567-52246e202aa3","Type":"ContainerStarted","Data":"573b92150405ba72507500489871bbf9dd580f79798cba2fde859ec8ea82ef4f"} Jan 22 10:51:33 crc kubenswrapper[4975]: I0122 10:51:33.221720 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2knnf" Jan 22 10:51:33 crc kubenswrapper[4975]: I0122 10:51:33.245357 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2knnf" podStartSLOduration=4.245341943 podStartE2EDuration="4.245341943s" podCreationTimestamp="2026-01-22 10:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:51:33.243963119 +0000 UTC m=+885.901999229" watchObservedRunningTime="2026-01-22 10:51:33.245341943 +0000 UTC m=+885.903378053" Jan 22 10:51:38 crc kubenswrapper[4975]: I0122 10:51:38.257770 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" event={"ID":"b0d54a39-8b95-459f-a047-244ed0e3a156","Type":"ContainerStarted","Data":"9abdfe6541e5d931ebe03abd94444bdb28e72ff4f2bee7029f1678c6cb9a8149"} Jan 22 10:51:38 crc kubenswrapper[4975]: I0122 10:51:38.259032 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:38 crc kubenswrapper[4975]: I0122 10:51:38.259204 4975 generic.go:334] "Generic (PLEG): container finished" podID="f8d84cf3-8bf0-47bb-ae46-eedcc76c4892" containerID="7b697aed198264f558ae760ec49b976609db4d7ab1a7c9cd824a15354afb15a3" exitCode=0 Jan 22 10:51:38 crc kubenswrapper[4975]: I0122 10:51:38.259244 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerDied","Data":"7b697aed198264f558ae760ec49b976609db4d7ab1a7c9cd824a15354afb15a3"} Jan 22 10:51:38 crc kubenswrapper[4975]: I0122 10:51:38.308820 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" podStartSLOduration=2.725581632 podStartE2EDuration="9.308806707s" podCreationTimestamp="2026-01-22 10:51:29 +0000 UTC" firstStartedPulling="2026-01-22 10:51:30.606175617 +0000 UTC m=+883.264211767" lastFinishedPulling="2026-01-22 10:51:37.189400692 +0000 UTC m=+889.847436842" observedRunningTime="2026-01-22 10:51:38.278830811 +0000 UTC m=+890.936866921" watchObservedRunningTime="2026-01-22 10:51:38.308806707 +0000 UTC m=+890.966842817" Jan 22 10:51:39 crc kubenswrapper[4975]: I0122 10:51:39.268311 4975 generic.go:334] "Generic (PLEG): container finished" podID="f8d84cf3-8bf0-47bb-ae46-eedcc76c4892" containerID="e18497b11a7bcbe4492e38edc2cd4612d9385902a6b0508172c3f179150a3262" exitCode=0 Jan 22 10:51:39 crc kubenswrapper[4975]: I0122 10:51:39.268418 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerDied","Data":"e18497b11a7bcbe4492e38edc2cd4612d9385902a6b0508172c3f179150a3262"} Jan 22 10:51:40 crc kubenswrapper[4975]: I0122 10:51:40.282719 4975 generic.go:334] "Generic (PLEG): container finished" podID="f8d84cf3-8bf0-47bb-ae46-eedcc76c4892" containerID="effe49efa11932aeb3de9cfd8606c428422979dfae862ed2a6c9760056588699" exitCode=0 Jan 22 10:51:40 crc kubenswrapper[4975]: I0122 10:51:40.282822 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerDied","Data":"effe49efa11932aeb3de9cfd8606c428422979dfae862ed2a6c9760056588699"} Jan 22 10:51:41 crc kubenswrapper[4975]: I0122 10:51:41.299871 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerStarted","Data":"c035c140bbb16624baa2a7d23364899ccbfd3391ec63bf6943d28f1fa645af6b"} Jan 22 10:51:41 crc kubenswrapper[4975]: I0122 10:51:41.300213 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerStarted","Data":"2ca9d125e6a24cad58a19ad68d578766ee1dd76f4720ee70d1a61592192ab6a6"} Jan 22 10:51:41 crc kubenswrapper[4975]: I0122 10:51:41.300236 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerStarted","Data":"9f672a3d29cb6dbf018a1274b41acd297c2795b8697fb60346097e8691fb58ea"} Jan 22 10:51:41 crc kubenswrapper[4975]: I0122 10:51:41.300257 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" 
event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerStarted","Data":"452e7ae31047323aec3187de5f693f4f8b392d9ad62262ae758801e263421745"} Jan 22 10:51:41 crc kubenswrapper[4975]: I0122 10:51:41.300276 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerStarted","Data":"0a7013d4e3a2a03a20baaa28b60d5e76130959c5d2d25baff19e81ceee2907d8"} Jan 22 10:51:41 crc kubenswrapper[4975]: I0122 10:51:41.717706 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2knnf" Jan 22 10:51:42 crc kubenswrapper[4975]: I0122 10:51:42.315174 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vcvnb" event={"ID":"f8d84cf3-8bf0-47bb-ae46-eedcc76c4892","Type":"ContainerStarted","Data":"a2ba915e1abdfe1f1903d47a5d4c8dcd196c8290b2c83700dce84493431de230"} Jan 22 10:51:42 crc kubenswrapper[4975]: I0122 10:51:42.315653 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:42 crc kubenswrapper[4975]: I0122 10:51:42.346564 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-vcvnb" podStartSLOduration=6.483778954 podStartE2EDuration="13.346547097s" podCreationTimestamp="2026-01-22 10:51:29 +0000 UTC" firstStartedPulling="2026-01-22 10:51:30.27848755 +0000 UTC m=+882.936523660" lastFinishedPulling="2026-01-22 10:51:37.141255663 +0000 UTC m=+889.799291803" observedRunningTime="2026-01-22 10:51:42.345490571 +0000 UTC m=+895.003526691" watchObservedRunningTime="2026-01-22 10:51:42.346547097 +0000 UTC m=+895.004583197" Jan 22 10:51:45 crc kubenswrapper[4975]: I0122 10:51:45.137399 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:45 crc kubenswrapper[4975]: I0122 10:51:45.185013 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.425016 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rvw6h"] Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.426234 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rvw6h" Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.429062 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.429078 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-d86r4" Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.429533 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.445495 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rvw6h"] Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.553363 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsht\" (UniqueName: \"kubernetes.io/projected/8f4b0804-c42b-4aad-bb37-d55746bcffc6-kube-api-access-pnsht\") pod \"openstack-operator-index-rvw6h\" (UID: \"8f4b0804-c42b-4aad-bb37-d55746bcffc6\") " pod="openstack-operators/openstack-operator-index-rvw6h" Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.655060 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnsht\" (UniqueName: \"kubernetes.io/projected/8f4b0804-c42b-4aad-bb37-d55746bcffc6-kube-api-access-pnsht\") pod \"openstack-operator-index-rvw6h\" (UID: \"8f4b0804-c42b-4aad-bb37-d55746bcffc6\") " pod="openstack-operators/openstack-operator-index-rvw6h" Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.684928 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnsht\" (UniqueName: \"kubernetes.io/projected/8f4b0804-c42b-4aad-bb37-d55746bcffc6-kube-api-access-pnsht\") pod \"openstack-operator-index-rvw6h\" (UID: \"8f4b0804-c42b-4aad-bb37-d55746bcffc6\") " pod="openstack-operators/openstack-operator-index-rvw6h" Jan 22 10:51:48 crc kubenswrapper[4975]: I0122 10:51:48.742681 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rvw6h" Jan 22 10:51:49 crc kubenswrapper[4975]: I0122 10:51:49.253800 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rvw6h"] Jan 22 10:51:49 crc kubenswrapper[4975]: W0122 10:51:49.262788 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f4b0804_c42b_4aad_bb37_d55746bcffc6.slice/crio-36b93b3f649015cc7c1fe2533023e6f63e42ffd589e449735f6689f21ba43264 WatchSource:0}: Error finding container 36b93b3f649015cc7c1fe2533023e6f63e42ffd589e449735f6689f21ba43264: Status 404 returned error can't find the container with id 36b93b3f649015cc7c1fe2533023e6f63e42ffd589e449735f6689f21ba43264 Jan 22 10:51:49 crc kubenswrapper[4975]: I0122 10:51:49.380940 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvw6h" event={"ID":"8f4b0804-c42b-4aad-bb37-d55746bcffc6","Type":"ContainerStarted","Data":"36b93b3f649015cc7c1fe2533023e6f63e42ffd589e449735f6689f21ba43264"} Jan 22 10:51:50 crc kubenswrapper[4975]: I0122 10:51:50.145567 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-vcvnb" Jan 22 10:51:50 crc kubenswrapper[4975]: I0122 10:51:50.162213 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6tqxg" Jan 22 10:51:50 crc kubenswrapper[4975]: I0122 10:51:50.827718 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-d2mbh" Jan 22 10:51:52 crc kubenswrapper[4975]: I0122 10:51:52.404457 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvw6h" event={"ID":"8f4b0804-c42b-4aad-bb37-d55746bcffc6","Type":"ContainerStarted","Data":"1610805e5db75a921d1f47b0edbd4a8cf7ea96bb06d362d81db6ccbff746dc28"} Jan 22 10:51:52 crc kubenswrapper[4975]: I0122 10:51:52.424847 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rvw6h" podStartSLOduration=1.776377047 podStartE2EDuration="4.424832914s" podCreationTimestamp="2026-01-22 10:51:48 +0000 UTC" firstStartedPulling="2026-01-22 10:51:49.26709896 +0000 UTC m=+901.925135110" lastFinishedPulling="2026-01-22 10:51:51.915554867 +0000 UTC m=+904.573590977" observedRunningTime="2026-01-22 10:51:52.422368163 +0000 UTC m=+905.080404353" watchObservedRunningTime="2026-01-22 10:51:52.424832914 +0000 UTC m=+905.082869024" Jan 22 10:51:57 crc kubenswrapper[4975]: I0122 10:51:57.436872 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:51:57 crc kubenswrapper[4975]: I0122 10:51:57.437529 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:51:58 crc kubenswrapper[4975]: I0122 10:51:58.743319 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-rvw6h" Jan 22 10:51:58 crc kubenswrapper[4975]: I0122 10:51:58.743422 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rvw6h" Jan 22 10:51:58 crc kubenswrapper[4975]: I0122 10:51:58.785049 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rvw6h" Jan 22 10:51:59 crc kubenswrapper[4975]: I0122 10:51:59.504959 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rvw6h" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.072669 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t"] Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.079248 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.082444 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-cr5z4" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.089092 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t"] Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.175582 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-util\") pod \"3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.175655 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-bundle\") pod \"3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.175730 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxg9j\" (UniqueName: \"kubernetes.io/projected/d9c59824-f969-44a3-ae16-9e2d998ca883-kube-api-access-wxg9j\") pod \"3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.277116 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-util\") pod \"3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.277236 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-bundle\") pod \"3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.277314 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxg9j\" (UniqueName: \"kubernetes.io/projected/d9c59824-f969-44a3-ae16-9e2d998ca883-kube-api-access-wxg9j\") pod \"3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.277701 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-util\") pod \"3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.277730 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-bundle\") pod \"3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.295113 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxg9j\" (UniqueName: \"kubernetes.io/projected/d9c59824-f969-44a3-ae16-9e2d998ca883-kube-api-access-wxg9j\") pod \"3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.427868 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:02 crc kubenswrapper[4975]: I0122 10:52:02.682419 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t"] Jan 22 10:52:02 crc kubenswrapper[4975]: W0122 10:52:02.683246 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9c59824_f969_44a3_ae16_9e2d998ca883.slice/crio-22349dd4cc85ebcc0a6bddb0da97510d24fbac4cc3955263967a350c797d4c4d WatchSource:0}: Error finding container 22349dd4cc85ebcc0a6bddb0da97510d24fbac4cc3955263967a350c797d4c4d: Status 404 returned error can't find the container with id 22349dd4cc85ebcc0a6bddb0da97510d24fbac4cc3955263967a350c797d4c4d Jan 22 10:52:03 crc kubenswrapper[4975]: I0122 10:52:03.505063 4975 generic.go:334] "Generic (PLEG): container finished" podID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerID="11c8d2f53f4b052c0a4ae2b62bbd415d6cc8b481bf27523c6bf7052bba418053" exitCode=0 Jan 22 10:52:03 crc kubenswrapper[4975]: I0122 10:52:03.505158 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" event={"ID":"d9c59824-f969-44a3-ae16-9e2d998ca883","Type":"ContainerDied","Data":"11c8d2f53f4b052c0a4ae2b62bbd415d6cc8b481bf27523c6bf7052bba418053"} Jan 22 10:52:03 crc kubenswrapper[4975]: I0122 10:52:03.505468 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" event={"ID":"d9c59824-f969-44a3-ae16-9e2d998ca883","Type":"ContainerStarted","Data":"22349dd4cc85ebcc0a6bddb0da97510d24fbac4cc3955263967a350c797d4c4d"} Jan 22 10:52:04 crc kubenswrapper[4975]: I0122 10:52:04.517457 4975 generic.go:334] "Generic (PLEG): container finished" podID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerID="5766348f65a90b70b5160bdb816c50799aa739e4c45754e5f3f3f2ba09ae61ef" exitCode=0 Jan 22 10:52:04 crc kubenswrapper[4975]: I0122 10:52:04.517527 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" event={"ID":"d9c59824-f969-44a3-ae16-9e2d998ca883","Type":"ContainerDied","Data":"5766348f65a90b70b5160bdb816c50799aa739e4c45754e5f3f3f2ba09ae61ef"} Jan 22 10:52:05 crc kubenswrapper[4975]: I0122 10:52:05.529242 4975 generic.go:334] "Generic (PLEG): container finished" podID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerID="f7b853e48f4961658bbe5edc0d687725ca5699cf2aa049a7ce5962bcfb0062ba" exitCode=0 Jan 22 10:52:05 crc kubenswrapper[4975]: I0122 10:52:05.529407 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" event={"ID":"d9c59824-f969-44a3-ae16-9e2d998ca883","Type":"ContainerDied","Data":"f7b853e48f4961658bbe5edc0d687725ca5699cf2aa049a7ce5962bcfb0062ba"} Jan 22 10:52:06 crc kubenswrapper[4975]: I0122 10:52:06.909143 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:06 crc kubenswrapper[4975]: I0122 10:52:06.955428 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxg9j\" (UniqueName: \"kubernetes.io/projected/d9c59824-f969-44a3-ae16-9e2d998ca883-kube-api-access-wxg9j\") pod \"d9c59824-f969-44a3-ae16-9e2d998ca883\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " Jan 22 10:52:06 crc kubenswrapper[4975]: I0122 10:52:06.957201 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-util\") pod \"d9c59824-f969-44a3-ae16-9e2d998ca883\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " Jan 22 10:52:06 crc kubenswrapper[4975]: I0122 10:52:06.957255 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-bundle\") pod \"d9c59824-f969-44a3-ae16-9e2d998ca883\" (UID: \"d9c59824-f969-44a3-ae16-9e2d998ca883\") " Jan 22 10:52:06 crc kubenswrapper[4975]: I0122 10:52:06.958025 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-bundle" (OuterVolumeSpecName: "bundle") pod "d9c59824-f969-44a3-ae16-9e2d998ca883" (UID: "d9c59824-f969-44a3-ae16-9e2d998ca883"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:52:06 crc kubenswrapper[4975]: I0122 10:52:06.965016 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c59824-f969-44a3-ae16-9e2d998ca883-kube-api-access-wxg9j" (OuterVolumeSpecName: "kube-api-access-wxg9j") pod "d9c59824-f969-44a3-ae16-9e2d998ca883" (UID: "d9c59824-f969-44a3-ae16-9e2d998ca883"). InnerVolumeSpecName "kube-api-access-wxg9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:52:06 crc kubenswrapper[4975]: I0122 10:52:06.987791 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-util" (OuterVolumeSpecName: "util") pod "d9c59824-f969-44a3-ae16-9e2d998ca883" (UID: "d9c59824-f969-44a3-ae16-9e2d998ca883"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:52:07 crc kubenswrapper[4975]: I0122 10:52:07.059473 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxg9j\" (UniqueName: \"kubernetes.io/projected/d9c59824-f969-44a3-ae16-9e2d998ca883-kube-api-access-wxg9j\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:07 crc kubenswrapper[4975]: I0122 10:52:07.059524 4975 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-util\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:07 crc kubenswrapper[4975]: I0122 10:52:07.059546 4975 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9c59824-f969-44a3-ae16-9e2d998ca883-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:07 crc kubenswrapper[4975]: I0122 10:52:07.551407 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" event={"ID":"d9c59824-f969-44a3-ae16-9e2d998ca883","Type":"ContainerDied","Data":"22349dd4cc85ebcc0a6bddb0da97510d24fbac4cc3955263967a350c797d4c4d"} Jan 22 10:52:07 crc kubenswrapper[4975]: I0122 10:52:07.551481 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22349dd4cc85ebcc0a6bddb0da97510d24fbac4cc3955263967a350c797d4c4d" Jan 22 10:52:07 crc kubenswrapper[4975]: I0122 10:52:07.551864 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.291558 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf"] Jan 22 10:52:10 crc kubenswrapper[4975]: E0122 10:52:10.292284 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerName="pull" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.292296 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerName="pull" Jan 22 10:52:10 crc kubenswrapper[4975]: E0122 10:52:10.292313 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerName="util" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.292319 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerName="util" Jan 22 10:52:10 crc kubenswrapper[4975]: E0122 10:52:10.292343 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerName="extract" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.292350 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerName="extract" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.292453 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c59824-f969-44a3-ae16-9e2d998ca883" containerName="extract" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.292812 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.294694 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-l9m8f" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.305597 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fshl5\" (UniqueName: \"kubernetes.io/projected/0c16cec1-1906-45c5-83be-5b14f7e44321-kube-api-access-fshl5\") pod \"openstack-operator-controller-init-9b49577dc-tm6xf\" (UID: \"0c16cec1-1906-45c5-83be-5b14f7e44321\") " pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.318844 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf"] Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.406636 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fshl5\" (UniqueName: \"kubernetes.io/projected/0c16cec1-1906-45c5-83be-5b14f7e44321-kube-api-access-fshl5\") pod \"openstack-operator-controller-init-9b49577dc-tm6xf\" (UID: \"0c16cec1-1906-45c5-83be-5b14f7e44321\") " pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.434524 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fshl5\" (UniqueName: \"kubernetes.io/projected/0c16cec1-1906-45c5-83be-5b14f7e44321-kube-api-access-fshl5\") pod \"openstack-operator-controller-init-9b49577dc-tm6xf\" (UID: \"0c16cec1-1906-45c5-83be-5b14f7e44321\") " pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" Jan 22 10:52:10 crc kubenswrapper[4975]: I0122 10:52:10.612103 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" Jan 22 10:52:11 crc kubenswrapper[4975]: I0122 10:52:11.080590 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf"] Jan 22 10:52:11 crc kubenswrapper[4975]: I0122 10:52:11.577925 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" event={"ID":"0c16cec1-1906-45c5-83be-5b14f7e44321","Type":"ContainerStarted","Data":"f692344ffd038591c1604c6b4fbe18caccaaceb1c4bceaa3efbd4a30cb294533"} Jan 22 10:52:15 crc kubenswrapper[4975]: I0122 10:52:15.604391 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" event={"ID":"0c16cec1-1906-45c5-83be-5b14f7e44321","Type":"ContainerStarted","Data":"70edf3eb4bf6de13da26a9caeffd276f7a2579cdec05ab9a1fca2cc74b610e0c"} Jan 22 10:52:15 crc kubenswrapper[4975]: I0122 10:52:15.605135 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" Jan 22 10:52:15 crc kubenswrapper[4975]: I0122 10:52:15.636459 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" podStartSLOduration=1.612206121 podStartE2EDuration="5.636441355s" podCreationTimestamp="2026-01-22 10:52:10 +0000 UTC" firstStartedPulling="2026-01-22 10:52:11.087837978 +0000 UTC m=+923.745874098" lastFinishedPulling="2026-01-22 10:52:15.112073222 +0000 UTC m=+927.770109332" observedRunningTime="2026-01-22 10:52:15.634296812 +0000 UTC m=+928.292332972" watchObservedRunningTime="2026-01-22 10:52:15.636441355 +0000 UTC m=+928.294477475" Jan 22 10:52:20 crc kubenswrapper[4975]: I0122 10:52:20.616234 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-9b49577dc-tm6xf" Jan 22 10:52:27 crc kubenswrapper[4975]: I0122 10:52:27.435819 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:52:27 crc kubenswrapper[4975]: I0122 10:52:27.436295 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:52:28 crc kubenswrapper[4975]: I0122 10:52:28.819768 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pc27q"] Jan 22 10:52:28 crc kubenswrapper[4975]: I0122 10:52:28.821027 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:28 crc kubenswrapper[4975]: I0122 10:52:28.833396 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pc27q"] Jan 22 10:52:28 crc kubenswrapper[4975]: I0122 10:52:28.996802 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-catalog-content\") pod \"certified-operators-pc27q\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:28 crc kubenswrapper[4975]: I0122 10:52:28.996917 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-utilities\") pod \"certified-operators-pc27q\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:28 crc kubenswrapper[4975]: I0122 10:52:28.996948 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj8p7\" (UniqueName: \"kubernetes.io/projected/d4ce709d-306c-4050-87f9-347861884dbc-kube-api-access-hj8p7\") pod \"certified-operators-pc27q\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:29 crc kubenswrapper[4975]: I0122 10:52:29.098804 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-utilities\") pod \"certified-operators-pc27q\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:29 crc kubenswrapper[4975]: I0122 10:52:29.098860 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj8p7\" (UniqueName: \"kubernetes.io/projected/d4ce709d-306c-4050-87f9-347861884dbc-kube-api-access-hj8p7\") pod \"certified-operators-pc27q\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:29 crc kubenswrapper[4975]: I0122 10:52:29.098900 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-catalog-content\") pod \"certified-operators-pc27q\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:29 crc kubenswrapper[4975]: I0122 10:52:29.099282 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-utilities\") pod \"certified-operators-pc27q\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:29 crc kubenswrapper[4975]: I0122 10:52:29.099284 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-catalog-content\") pod \"certified-operators-pc27q\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:29 crc kubenswrapper[4975]: I0122 10:52:29.126276 4975 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hj8p7\" (UniqueName: \"kubernetes.io/projected/d4ce709d-306c-4050-87f9-347861884dbc-kube-api-access-hj8p7\") pod \"certified-operators-pc27q\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:29 crc kubenswrapper[4975]: I0122 10:52:29.151867 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:29 crc kubenswrapper[4975]: I0122 10:52:29.610031 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pc27q"] Jan 22 10:52:29 crc kubenswrapper[4975]: W0122 10:52:29.613814 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4ce709d_306c_4050_87f9_347861884dbc.slice/crio-1362651f090f673b762f128900d9fde835ea5022eec4a7eb650c45c8cf0b6243 WatchSource:0}: Error finding container 1362651f090f673b762f128900d9fde835ea5022eec4a7eb650c45c8cf0b6243: Status 404 returned error can't find the container with id 1362651f090f673b762f128900d9fde835ea5022eec4a7eb650c45c8cf0b6243 Jan 22 10:52:29 crc kubenswrapper[4975]: I0122 10:52:29.691834 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc27q" event={"ID":"d4ce709d-306c-4050-87f9-347861884dbc","Type":"ContainerStarted","Data":"1362651f090f673b762f128900d9fde835ea5022eec4a7eb650c45c8cf0b6243"} Jan 22 10:52:30 crc kubenswrapper[4975]: I0122 10:52:30.700685 4975 generic.go:334] "Generic (PLEG): container finished" podID="d4ce709d-306c-4050-87f9-347861884dbc" containerID="b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737" exitCode=0 Jan 22 10:52:30 crc kubenswrapper[4975]: I0122 10:52:30.700838 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc27q" event={"ID":"d4ce709d-306c-4050-87f9-347861884dbc","Type":"ContainerDied","Data":"b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737"} Jan 22 10:52:31 crc kubenswrapper[4975]: I0122 10:52:31.708068 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc27q" event={"ID":"d4ce709d-306c-4050-87f9-347861884dbc","Type":"ContainerStarted","Data":"7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b"} Jan 22 10:52:32 crc kubenswrapper[4975]: I0122 10:52:32.716282 4975 generic.go:334] "Generic (PLEG): container finished" podID="d4ce709d-306c-4050-87f9-347861884dbc" containerID="7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b" exitCode=0 Jan 22 10:52:32 crc kubenswrapper[4975]: I0122 10:52:32.716345 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc27q" event={"ID":"d4ce709d-306c-4050-87f9-347861884dbc","Type":"ContainerDied","Data":"7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b"} Jan 22 10:52:33 crc kubenswrapper[4975]: I0122 10:52:33.724841 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc27q" event={"ID":"d4ce709d-306c-4050-87f9-347861884dbc","Type":"ContainerStarted","Data":"75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7"} Jan 22 10:52:33 crc kubenswrapper[4975]: I0122 10:52:33.744118 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pc27q" 
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.413570 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vskbv"]
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.415236 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.426938 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vskbv"]
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.508693 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz22p\" (UniqueName: \"kubernetes.io/projected/a490f21d-ca26-48ea-88c7-c3b8ff21995e-kube-api-access-jz22p\") pod \"community-operators-vskbv\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.508747 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-utilities\") pod \"community-operators-vskbv\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.508932 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-catalog-content\") pod \"community-operators-vskbv\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.609984 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz22p\" (UniqueName: \"kubernetes.io/projected/a490f21d-ca26-48ea-88c7-c3b8ff21995e-kube-api-access-jz22p\") pod \"community-operators-vskbv\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.610031 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-utilities\") pod \"community-operators-vskbv\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.610079 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-catalog-content\") pod \"community-operators-vskbv\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.610642 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-catalog-content\") pod \"community-operators-vskbv\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.610670 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-utilities\") pod \"community-operators-vskbv\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.632419 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz22p\" (UniqueName: \"kubernetes.io/projected/a490f21d-ca26-48ea-88c7-c3b8ff21995e-kube-api-access-jz22p\") pod \"community-operators-vskbv\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:37 crc kubenswrapper[4975]: I0122 10:52:37.735953 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vskbv"
Jan 22 10:52:38 crc kubenswrapper[4975]: I0122 10:52:38.049941 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vskbv"]
Jan 22 10:52:38 crc kubenswrapper[4975]: I0122 10:52:38.773394 4975 generic.go:334] "Generic (PLEG): container finished" podID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerID="1bd0cc8c960ea514d070992819844b00d9e158704e62c1d5097ea692f0701e8e" exitCode=0
Jan 22 10:52:38 crc kubenswrapper[4975]: I0122 10:52:38.773469 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vskbv" event={"ID":"a490f21d-ca26-48ea-88c7-c3b8ff21995e","Type":"ContainerDied","Data":"1bd0cc8c960ea514d070992819844b00d9e158704e62c1d5097ea692f0701e8e"}
Jan 22 10:52:38 crc kubenswrapper[4975]: I0122 10:52:38.773746 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vskbv" event={"ID":"a490f21d-ca26-48ea-88c7-c3b8ff21995e","Type":"ContainerStarted","Data":"abd8c378073cfed7022c42468a70584c0cb593669e1dc7ba82bfed90165df13d"}
Jan 22 10:52:39 crc kubenswrapper[4975]: I0122 10:52:39.152820 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pc27q"
Jan 22 10:52:39 crc kubenswrapper[4975]: I0122 10:52:39.152866 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pc27q"
Jan 22 10:52:39 crc kubenswrapper[4975]: I0122 10:52:39.190515 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pc27q"
Jan 22 10:52:39 crc kubenswrapper[4975]: I0122 10:52:39.861685 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pc27q"
Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.797666 4975 generic.go:334] "Generic (PLEG): container finished" podID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerID="26c0eb1c27c5809b4b743147ef4710f2d6fc7d0b4a34f2d19c0d72cbcaa2b8d4" exitCode=0
Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.797730 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vskbv" event={"ID":"a490f21d-ca26-48ea-88c7-c3b8ff21995e","Type":"ContainerDied","Data":"26c0eb1c27c5809b4b743147ef4710f2d6fc7d0b4a34f2d19c0d72cbcaa2b8d4"}
event={"ID":"a490f21d-ca26-48ea-88c7-c3b8ff21995e","Type":"ContainerDied","Data":"26c0eb1c27c5809b4b743147ef4710f2d6fc7d0b4a34f2d19c0d72cbcaa2b8d4"} Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.819446 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j"] Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.820431 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.823171 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-452xl" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.845981 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh"] Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.847170 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.849560 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rhdjr" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.876935 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j"] Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.883062 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8"] Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.883836 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.885837 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mqchr" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.895911 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh"] Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.919215 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8"] Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.922795 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h"] Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.923573 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.929007 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-q7wgk" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.944046 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6"] Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.944793 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.953412 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-wxtdz" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.954871 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9c4l\" (UniqueName: \"kubernetes.io/projected/1cb3165d-a089-4c3d-9d70-73145248cd5c-kube-api-access-h9c4l\") pod \"barbican-operator-controller-manager-59dd8b7cbf-wsw9j\" (UID: \"1cb3165d-a089-4c3d-9d70-73145248cd5c\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.954916 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49fh9\" (UniqueName: \"kubernetes.io/projected/17061bf9-682c-4294-950a-18a5c5a00577-kube-api-access-49fh9\") pod \"designate-operator-controller-manager-b45d7bf98-6dln8\" (UID: \"17061bf9-682c-4294-950a-18a5c5a00577\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.954942 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztxcn\" (UniqueName: \"kubernetes.io/projected/f1fa565f-67df-4a8f-9f10-33397d530bbb-kube-api-access-ztxcn\") pod \"cinder-operator-controller-manager-69cf5d4557-5w2lh\" (UID: \"f1fa565f-67df-4a8f-9f10-33397d530bbb\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.959993 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h"] Jan 22 10:52:40 crc kubenswrapper[4975]: I0122 10:52:40.994432 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.018005 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.018766 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.020423 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-b6qmn" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.037107 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.037822 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.039969 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gw52z" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.040128 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.044443 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.055934 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxthp\" (UniqueName: \"kubernetes.io/projected/9b23e2d7-2430-4770-bb64-b2d8a14c2f7c-kube-api-access-qxthp\") pod \"glance-operator-controller-manager-78fdd796fd-hgw2h\" (UID: \"9b23e2d7-2430-4770-bb64-b2d8a14c2f7c\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.055992 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9c4l\" (UniqueName: \"kubernetes.io/projected/1cb3165d-a089-4c3d-9d70-73145248cd5c-kube-api-access-h9c4l\") pod \"barbican-operator-controller-manager-59dd8b7cbf-wsw9j\" (UID: \"1cb3165d-a089-4c3d-9d70-73145248cd5c\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.056046 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pcsk\" (UniqueName: \"kubernetes.io/projected/e8365e14-67ba-495f-a996-a2b63c7a391e-kube-api-access-2pcsk\") pod \"heat-operator-controller-manager-594c8c9d5d-kfvm6\" (UID: \"e8365e14-67ba-495f-a996-a2b63c7a391e\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.056105 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49fh9\" (UniqueName: \"kubernetes.io/projected/17061bf9-682c-4294-950a-18a5c5a00577-kube-api-access-49fh9\") pod \"designate-operator-controller-manager-b45d7bf98-6dln8\" (UID: \"17061bf9-682c-4294-950a-18a5c5a00577\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.056126 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztxcn\" (UniqueName: \"kubernetes.io/projected/f1fa565f-67df-4a8f-9f10-33397d530bbb-kube-api-access-ztxcn\") pod \"cinder-operator-controller-manager-69cf5d4557-5w2lh\" (UID: \"f1fa565f-67df-4a8f-9f10-33397d530bbb\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.074402 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.075417 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.082965 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.083924 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.087733 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-p4k4r" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.097861 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-srlwj" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.105388 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9c4l\" (UniqueName: \"kubernetes.io/projected/1cb3165d-a089-4c3d-9d70-73145248cd5c-kube-api-access-h9c4l\") pod \"barbican-operator-controller-manager-59dd8b7cbf-wsw9j\" (UID: \"1cb3165d-a089-4c3d-9d70-73145248cd5c\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.107111 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.117523 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.122001 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49fh9\" (UniqueName: \"kubernetes.io/projected/17061bf9-682c-4294-950a-18a5c5a00577-kube-api-access-49fh9\") pod \"designate-operator-controller-manager-b45d7bf98-6dln8\" (UID: \"17061bf9-682c-4294-950a-18a5c5a00577\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.128941 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztxcn\" (UniqueName: \"kubernetes.io/projected/f1fa565f-67df-4a8f-9f10-33397d530bbb-kube-api-access-ztxcn\") pod \"cinder-operator-controller-manager-69cf5d4557-5w2lh\" (UID: \"f1fa565f-67df-4a8f-9f10-33397d530bbb\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.133676 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.149484 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.170544 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.171193 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.174686 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-hm9fz" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.184320 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.193110 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.194422 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.196463 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xxsg9" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.206414 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-688h6\" (UniqueName: \"kubernetes.io/projected/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-kube-api-access-688h6\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.206477 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-859js\" (UniqueName: \"kubernetes.io/projected/f1fc9c22-ce13-4bde-8901-08225613757a-kube-api-access-859js\") pod \"keystone-operator-controller-manager-b8b6d4659-jk546\" (UID: \"f1fc9c22-ce13-4bde-8901-08225613757a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.206535 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pcsk\" (UniqueName: \"kubernetes.io/projected/e8365e14-67ba-495f-a996-a2b63c7a391e-kube-api-access-2pcsk\") pod \"heat-operator-controller-manager-594c8c9d5d-kfvm6\" (UID: \"e8365e14-67ba-495f-a996-a2b63c7a391e\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.206561 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.206600 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xsfx\" (UniqueName: \"kubernetes.io/projected/96fb0fc0-a7b6-4607-b879-9bfb9f318742-kube-api-access-8xsfx\") pod \"horizon-operator-controller-manager-77d5c5b54f-g49ht\" (UID: \"96fb0fc0-a7b6-4607-b879-9bfb9f318742\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.206733 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbd86\" (UniqueName: \"kubernetes.io/projected/962350e2-bd43-433c-af57-8094db573873-kube-api-access-bbd86\") pod \"ironic-operator-controller-manager-69d6c9f5b8-dm74c\" (UID: \"962350e2-bd43-433c-af57-8094db573873\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.206763 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxthp\" (UniqueName: \"kubernetes.io/projected/9b23e2d7-2430-4770-bb64-b2d8a14c2f7c-kube-api-access-qxthp\") pod \"glance-operator-controller-manager-78fdd796fd-hgw2h\" (UID: \"9b23e2d7-2430-4770-bb64-b2d8a14c2f7c\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.207266 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.236120 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.278023 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.278782 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.286789 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-88d97" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.287483 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxthp\" (UniqueName: \"kubernetes.io/projected/9b23e2d7-2430-4770-bb64-b2d8a14c2f7c-kube-api-access-qxthp\") pod \"glance-operator-controller-manager-78fdd796fd-hgw2h\" (UID: \"9b23e2d7-2430-4770-bb64-b2d8a14c2f7c\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.287698 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pcsk\" (UniqueName: \"kubernetes.io/projected/e8365e14-67ba-495f-a996-a2b63c7a391e-kube-api-access-2pcsk\") pod \"heat-operator-controller-manager-594c8c9d5d-kfvm6\" (UID: \"e8365e14-67ba-495f-a996-a2b63c7a391e\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.288455 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.298215 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.303729 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.305703 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.308266 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n5png" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.308860 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-688h6\" (UniqueName: \"kubernetes.io/projected/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-kube-api-access-688h6\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.308899 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-859js\" (UniqueName: \"kubernetes.io/projected/f1fc9c22-ce13-4bde-8901-08225613757a-kube-api-access-859js\") pod \"keystone-operator-controller-manager-b8b6d4659-jk546\" (UID: \"f1fc9c22-ce13-4bde-8901-08225613757a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.308929 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.308964 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xsfx\" (UniqueName: \"kubernetes.io/projected/96fb0fc0-a7b6-4607-b879-9bfb9f318742-kube-api-access-8xsfx\") pod \"horizon-operator-controller-manager-77d5c5b54f-g49ht\" (UID: \"96fb0fc0-a7b6-4607-b879-9bfb9f318742\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.309008 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s26lm\" (UniqueName: \"kubernetes.io/projected/ff361e81-48c6-4926-84d9-fcca6fba71f6-kube-api-access-s26lm\") pod \"manila-operator-controller-manager-78c6999f6f-48bd6\" (UID: \"ff361e81-48c6-4926-84d9-fcca6fba71f6\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.309055 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbd86\" (UniqueName: \"kubernetes.io/projected/962350e2-bd43-433c-af57-8094db573873-kube-api-access-bbd86\") pod \"ironic-operator-controller-manager-69d6c9f5b8-dm74c\" (UID: \"962350e2-bd43-433c-af57-8094db573873\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.309071 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz9b6\" (UniqueName: \"kubernetes.io/projected/a600298c-821b-47b5-9018-8d7ef15267c8-kube-api-access-zz9b6\") pod \"mariadb-operator-controller-manager-c87fff755-qcx4k\" (UID: \"a600298c-821b-47b5-9018-8d7ef15267c8\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" Jan 22 10:52:41 crc 
kubenswrapper[4975]: E0122 10:52:41.309205 4975 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: E0122 10:52:41.309246 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert podName:fe34d1a8-dae0-4b05-b51f-9f577a2ff8de nodeName:}" failed. No retries permitted until 2026-01-22 10:52:41.809230412 +0000 UTC m=+954.467266522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert") pod "infra-operator-controller-manager-54ccf4f85d-s5qw6" (UID: "fe34d1a8-dae0-4b05-b51f-9f577a2ff8de") : secret "infra-operator-webhook-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.312075 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.328571 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.329679 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.331058 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jqnnp" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.334490 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.342180 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-859js\" (UniqueName: \"kubernetes.io/projected/f1fc9c22-ce13-4bde-8901-08225613757a-kube-api-access-859js\") pod \"keystone-operator-controller-manager-b8b6d4659-jk546\" (UID: \"f1fc9c22-ce13-4bde-8901-08225613757a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.346816 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.347555 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.350124 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jzzgf" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.353791 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xsfx\" (UniqueName: \"kubernetes.io/projected/96fb0fc0-a7b6-4607-b879-9bfb9f318742-kube-api-access-8xsfx\") pod \"horizon-operator-controller-manager-77d5c5b54f-g49ht\" (UID: \"96fb0fc0-a7b6-4607-b879-9bfb9f318742\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.355277 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.356025 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.362480 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.365389 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.365459 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xx56x" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.381536 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.382050 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbd86\" (UniqueName: \"kubernetes.io/projected/962350e2-bd43-433c-af57-8094db573873-kube-api-access-bbd86\") pod \"ironic-operator-controller-manager-69d6c9f5b8-dm74c\" (UID: \"962350e2-bd43-433c-af57-8094db573873\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.382327 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.384837 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-688h6\" (UniqueName: \"kubernetes.io/projected/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-kube-api-access-688h6\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.386720 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-p2rhc" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.396395 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.409727 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.411610 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t26sk\" (UniqueName: \"kubernetes.io/projected/3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb-kube-api-access-t26sk\") pod \"nova-operator-controller-manager-6b8bc8d87d-4jt9j\" (UID: \"3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.411649 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s26lm\" (UniqueName: \"kubernetes.io/projected/ff361e81-48c6-4926-84d9-fcca6fba71f6-kube-api-access-s26lm\") pod \"manila-operator-controller-manager-78c6999f6f-48bd6\" (UID: \"ff361e81-48c6-4926-84d9-fcca6fba71f6\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.411693 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz9b6\" (UniqueName: \"kubernetes.io/projected/a600298c-821b-47b5-9018-8d7ef15267c8-kube-api-access-zz9b6\") pod \"mariadb-operator-controller-manager-c87fff755-qcx4k\" (UID: \"a600298c-821b-47b5-9018-8d7ef15267c8\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.411754 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttx52\" (UniqueName: \"kubernetes.io/projected/aa6e0f83-d561-4537-934c-54373200aa1f-kube-api-access-ttx52\") pod \"neutron-operator-controller-manager-5d8f59fb49-27zdj\" (UID: \"aa6e0f83-d561-4537-934c-54373200aa1f\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.422553 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.437849 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s26lm\" (UniqueName: \"kubernetes.io/projected/ff361e81-48c6-4926-84d9-fcca6fba71f6-kube-api-access-s26lm\") pod \"manila-operator-controller-manager-78c6999f6f-48bd6\" (UID: \"ff361e81-48c6-4926-84d9-fcca6fba71f6\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.443738 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz9b6\" (UniqueName: \"kubernetes.io/projected/a600298c-821b-47b5-9018-8d7ef15267c8-kube-api-access-zz9b6\") pod \"mariadb-operator-controller-manager-c87fff755-qcx4k\" (UID: \"a600298c-821b-47b5-9018-8d7ef15267c8\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.463675 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.464390 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.469062 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-srx6d" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.483106 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.489665 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.507388 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.508164 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.513889 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zntnj" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.546925 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.546967 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.547703 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.549379 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr7qx\" (UniqueName: \"kubernetes.io/projected/9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62-kube-api-access-vr7qx\") pod \"octavia-operator-controller-manager-7bd9774b6-s64bc\" (UID: \"9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.549433 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xptjq\" (UniqueName: \"kubernetes.io/projected/834b49cf-0565-4e20-b598-7d01ea144148-kube-api-access-xptjq\") pod \"ovn-operator-controller-manager-55db956ddc-wmzml\" (UID: \"834b49cf-0565-4e20-b598-7d01ea144148\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.549460 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.549512 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7tzt\" (UniqueName: \"kubernetes.io/projected/177b52ae-55bc-40d1-badb-253927400560-kube-api-access-b7tzt\") pod \"placement-operator-controller-manager-5d646b7d76-pspwb\" (UID: \"177b52ae-55bc-40d1-badb-253927400560\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.549589 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttx52\" (UniqueName: \"kubernetes.io/projected/aa6e0f83-d561-4537-934c-54373200aa1f-kube-api-access-ttx52\") pod \"neutron-operator-controller-manager-5d8f59fb49-27zdj\" (UID: \"aa6e0f83-d561-4537-934c-54373200aa1f\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.549677 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srkmw\" (UniqueName: \"kubernetes.io/projected/dc8e73d1-2925-43ef-a854-eb93ead085d5-kube-api-access-srkmw\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.549724 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t26sk\" (UniqueName: \"kubernetes.io/projected/3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb-kube-api-access-t26sk\") pod \"nova-operator-controller-manager-6b8bc8d87d-4jt9j\" (UID: \"3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.552538 4975 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xkkrw" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.553078 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.555640 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.573579 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.584468 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.585308 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.590523 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fr8gb" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.591073 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.596040 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.607560 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t26sk\" (UniqueName: \"kubernetes.io/projected/3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb-kube-api-access-t26sk\") pod \"nova-operator-controller-manager-6b8bc8d87d-4jt9j\" (UID: \"3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.622009 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttx52\" (UniqueName: \"kubernetes.io/projected/aa6e0f83-d561-4537-934c-54373200aa1f-kube-api-access-ttx52\") pod \"neutron-operator-controller-manager-5d8f59fb49-27zdj\" (UID: \"aa6e0f83-d561-4537-934c-54373200aa1f\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.627209 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.627874 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.628891 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.634586 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.635969 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9zgv5" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.636188 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.643213 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.652940 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srkmw\" (UniqueName: \"kubernetes.io/projected/dc8e73d1-2925-43ef-a854-eb93ead085d5-kube-api-access-srkmw\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653106 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvq4s\" (UniqueName: \"kubernetes.io/projected/17acd27c-28fa-4f87-a3e3-c0c63adeeb9a-kube-api-access-wvq4s\") pod \"test-operator-controller-manager-69797bbcbd-fpxsm\" (UID: \"17acd27c-28fa-4f87-a3e3-c0c63adeeb9a\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653183 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr7qx\" (UniqueName: \"kubernetes.io/projected/9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62-kube-api-access-vr7qx\") pod \"octavia-operator-controller-manager-7bd9774b6-s64bc\" (UID: \"9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653269 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653363 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xptjq\" (UniqueName: \"kubernetes.io/projected/834b49cf-0565-4e20-b598-7d01ea144148-kube-api-access-xptjq\") pod \"ovn-operator-controller-manager-55db956ddc-wmzml\" (UID: \"834b49cf-0565-4e20-b598-7d01ea144148\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653432 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: 
\"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653494 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653570 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7tzt\" (UniqueName: \"kubernetes.io/projected/177b52ae-55bc-40d1-badb-253927400560-kube-api-access-b7tzt\") pod \"placement-operator-controller-manager-5d646b7d76-pspwb\" (UID: \"177b52ae-55bc-40d1-badb-253927400560\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653643 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjd9p\" (UniqueName: \"kubernetes.io/projected/88fd0de4-4508-450b-97b4-3206c50fb0a2-kube-api-access-gjd9p\") pod \"watcher-operator-controller-manager-5ffb9c6597-ddrzg\" (UID: \"88fd0de4-4508-450b-97b4-3206c50fb0a2\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653715 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq5t5\" (UniqueName: \"kubernetes.io/projected/2203164e-ced5-46bd-85c1-92eb94c7fb9f-kube-api-access-dq5t5\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653791 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw6pf\" (UniqueName: \"kubernetes.io/projected/c6673acf-6a08-449e-b67a-e1245dbe2206-kube-api-access-lw6pf\") pod \"telemetry-operator-controller-manager-85cd9769bb-l8cz4\" (UID: \"c6673acf-6a08-449e-b67a-e1245dbe2206\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.653890 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7bwv\" (UniqueName: \"kubernetes.io/projected/ea739801-0b35-4cd3-a198-a58e93059f88-kube-api-access-r7bwv\") pod \"swift-operator-controller-manager-547cbdb99f-x2kk6\" (UID: \"ea739801-0b35-4cd3-a198-a58e93059f88\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" Jan 22 10:52:41 crc kubenswrapper[4975]: E0122 10:52:41.654498 4975 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: E0122 10:52:41.654580 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert podName:dc8e73d1-2925-43ef-a854-eb93ead085d5 nodeName:}" failed. 
No retries permitted until 2026-01-22 10:52:42.154559428 +0000 UTC m=+954.812595608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" (UID: "dc8e73d1-2925-43ef-a854-eb93ead085d5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.655441 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.693832 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.699400 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.714225 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7tzt\" (UniqueName: \"kubernetes.io/projected/177b52ae-55bc-40d1-badb-253927400560-kube-api-access-b7tzt\") pod \"placement-operator-controller-manager-5d646b7d76-pspwb\" (UID: \"177b52ae-55bc-40d1-badb-253927400560\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.719874 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pc27q"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.721957 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr7qx\" (UniqueName: \"kubernetes.io/projected/9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62-kube-api-access-vr7qx\") pod \"octavia-operator-controller-manager-7bd9774b6-s64bc\" (UID: \"9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.722099 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srkmw\" (UniqueName: \"kubernetes.io/projected/dc8e73d1-2925-43ef-a854-eb93ead085d5-kube-api-access-srkmw\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.722516 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xptjq\" (UniqueName: \"kubernetes.io/projected/834b49cf-0565-4e20-b598-7d01ea144148-kube-api-access-xptjq\") pod \"ovn-operator-controller-manager-55db956ddc-wmzml\" (UID: \"834b49cf-0565-4e20-b598-7d01ea144148\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.741535 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.742720 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.745482 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f"] Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.748395 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.748848 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-q5b94" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.755675 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjd9p\" (UniqueName: \"kubernetes.io/projected/88fd0de4-4508-450b-97b4-3206c50fb0a2-kube-api-access-gjd9p\") pod \"watcher-operator-controller-manager-5ffb9c6597-ddrzg\" (UID: \"88fd0de4-4508-450b-97b4-3206c50fb0a2\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.755842 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq5t5\" (UniqueName: \"kubernetes.io/projected/2203164e-ced5-46bd-85c1-92eb94c7fb9f-kube-api-access-dq5t5\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.755920 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw6pf\" (UniqueName: \"kubernetes.io/projected/c6673acf-6a08-449e-b67a-e1245dbe2206-kube-api-access-lw6pf\") pod \"telemetry-operator-controller-manager-85cd9769bb-l8cz4\" (UID: \"c6673acf-6a08-449e-b67a-e1245dbe2206\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.756031 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7bwv\" (UniqueName: \"kubernetes.io/projected/ea739801-0b35-4cd3-a198-a58e93059f88-kube-api-access-r7bwv\") pod \"swift-operator-controller-manager-547cbdb99f-x2kk6\" (UID: \"ea739801-0b35-4cd3-a198-a58e93059f88\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.756138 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvq4s\" (UniqueName: \"kubernetes.io/projected/17acd27c-28fa-4f87-a3e3-c0c63adeeb9a-kube-api-access-wvq4s\") pod \"test-operator-controller-manager-69797bbcbd-fpxsm\" (UID: \"17acd27c-28fa-4f87-a3e3-c0c63adeeb9a\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.756221 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.756297 4975 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.756403 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qksg9\" (UniqueName: \"kubernetes.io/projected/d7834e51-fb9e-4f97-a385-dd98869ece3b-kube-api-access-qksg9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kqh4f\" (UID: \"d7834e51-fb9e-4f97-a385-dd98869ece3b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" Jan 22 10:52:41 crc kubenswrapper[4975]: E0122 10:52:41.756600 4975 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: E0122 10:52:41.756667 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. No retries permitted until 2026-01-22 10:52:42.256638279 +0000 UTC m=+954.914674389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "metrics-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: E0122 10:52:41.756825 4975 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: E0122 10:52:41.756915 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. No retries permitted until 2026-01-22 10:52:42.256900196 +0000 UTC m=+954.914936296 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "webhook-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.777775 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvq4s\" (UniqueName: \"kubernetes.io/projected/17acd27c-28fa-4f87-a3e3-c0c63adeeb9a-kube-api-access-wvq4s\") pod \"test-operator-controller-manager-69797bbcbd-fpxsm\" (UID: \"17acd27c-28fa-4f87-a3e3-c0c63adeeb9a\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.782976 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjd9p\" (UniqueName: \"kubernetes.io/projected/88fd0de4-4508-450b-97b4-3206c50fb0a2-kube-api-access-gjd9p\") pod \"watcher-operator-controller-manager-5ffb9c6597-ddrzg\" (UID: \"88fd0de4-4508-450b-97b4-3206c50fb0a2\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.783075 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq5t5\" (UniqueName: \"kubernetes.io/projected/2203164e-ced5-46bd-85c1-92eb94c7fb9f-kube-api-access-dq5t5\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.786600 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw6pf\" (UniqueName: \"kubernetes.io/projected/c6673acf-6a08-449e-b67a-e1245dbe2206-kube-api-access-lw6pf\") pod \"telemetry-operator-controller-manager-85cd9769bb-l8cz4\" (UID: \"c6673acf-6a08-449e-b67a-e1245dbe2206\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.793837 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.796112 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7bwv\" (UniqueName: \"kubernetes.io/projected/ea739801-0b35-4cd3-a198-a58e93059f88-kube-api-access-r7bwv\") pod \"swift-operator-controller-manager-547cbdb99f-x2kk6\" (UID: \"ea739801-0b35-4cd3-a198-a58e93059f88\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.809788 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pc27q" podUID="d4ce709d-306c-4050-87f9-347861884dbc" containerName="registry-server" containerID="cri-o://75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7" gracePeriod=2 Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.815632 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.847172 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.862709 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qksg9\" (UniqueName: \"kubernetes.io/projected/d7834e51-fb9e-4f97-a385-dd98869ece3b-kube-api-access-qksg9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kqh4f\" (UID: \"d7834e51-fb9e-4f97-a385-dd98869ece3b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.862803 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:41 crc kubenswrapper[4975]: E0122 10:52:41.863013 4975 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: E0122 10:52:41.863089 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert podName:fe34d1a8-dae0-4b05-b51f-9f577a2ff8de nodeName:}" failed. No retries permitted until 2026-01-22 10:52:42.863068989 +0000 UTC m=+955.521105099 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert") pod "infra-operator-controller-manager-54ccf4f85d-s5qw6" (UID: "fe34d1a8-dae0-4b05-b51f-9f577a2ff8de") : secret "infra-operator-webhook-server-cert" not found Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.881830 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.896752 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qksg9\" (UniqueName: \"kubernetes.io/projected/d7834e51-fb9e-4f97-a385-dd98869ece3b-kube-api-access-qksg9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kqh4f\" (UID: \"d7834e51-fb9e-4f97-a385-dd98869ece3b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.909096 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" Jan 22 10:52:41 crc kubenswrapper[4975]: I0122 10:52:41.941797 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.003044 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.104656 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8"] Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.126695 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j"] Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.168471 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:42 crc kubenswrapper[4975]: E0122 10:52:42.168872 4975 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:42 crc kubenswrapper[4975]: E0122 10:52:42.168914 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert podName:dc8e73d1-2925-43ef-a854-eb93ead085d5 nodeName:}" failed. No retries permitted until 2026-01-22 10:52:43.168901571 +0000 UTC m=+955.826937681 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" (UID: "dc8e73d1-2925-43ef-a854-eb93ead085d5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:42 crc kubenswrapper[4975]: W0122 10:52:42.187075 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17061bf9_682c_4294_950a_18a5c5a00577.slice/crio-30e40ca7565e0da08de2d1460acbaaf96c80e3e569e6b1e9f6652b9b9b6a24c8 WatchSource:0}: Error finding container 30e40ca7565e0da08de2d1460acbaaf96c80e3e569e6b1e9f6652b9b9b6a24c8: Status 404 returned error can't find the container with id 30e40ca7565e0da08de2d1460acbaaf96c80e3e569e6b1e9f6652b9b9b6a24c8 Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.270209 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.270274 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:42 crc kubenswrapper[4975]: E0122 10:52:42.270362 4975 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 10:52:42 crc kubenswrapper[4975]: E0122 10:52:42.270421 4975 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. No retries permitted until 2026-01-22 10:52:43.270403588 +0000 UTC m=+955.928439708 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "metrics-server-cert" not found Jan 22 10:52:42 crc kubenswrapper[4975]: E0122 10:52:42.270485 4975 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 10:52:42 crc kubenswrapper[4975]: E0122 10:52:42.270536 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. No retries permitted until 2026-01-22 10:52:43.27051063 +0000 UTC m=+955.928546740 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "webhook-server-cert" not found Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.495829 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh"] Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.646600 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht"] Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.655250 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc"] Jan 22 10:52:42 crc kubenswrapper[4975]: W0122 10:52:42.690503 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c9f78cf_4a34_4dd9_9d0a_feb77ca25b62.slice/crio-f33016057aef129a46870b4df7623e876e54250c52d69736f5eb9b0cecce362d WatchSource:0}: Error finding container f33016057aef129a46870b4df7623e876e54250c52d69736f5eb9b0cecce362d: Status 404 returned error can't find the container with id f33016057aef129a46870b4df7623e876e54250c52d69736f5eb9b0cecce362d Jan 22 10:52:42 crc kubenswrapper[4975]: W0122 10:52:42.691764 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96fb0fc0_a7b6_4607_b879_9bfb9f318742.slice/crio-1765a707906ae372d4228d99e5d0cbf814060393ee14502c26ae2db83cb384d8 WatchSource:0}: Error finding container 1765a707906ae372d4228d99e5d0cbf814060393ee14502c26ae2db83cb384d8: Status 404 returned error can't find the container with id 1765a707906ae372d4228d99e5d0cbf814060393ee14502c26ae2db83cb384d8 Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.720431 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6"] Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.728784 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546"] Jan 22 10:52:42 crc 
kubenswrapper[4975]: I0122 10:52:42.739956 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj"] Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.825126 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" event={"ID":"9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62","Type":"ContainerStarted","Data":"f33016057aef129a46870b4df7623e876e54250c52d69736f5eb9b0cecce362d"} Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.827405 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" event={"ID":"f1fa565f-67df-4a8f-9f10-33397d530bbb","Type":"ContainerStarted","Data":"9b062edf9bb525b08bed7cebc78e1a19d6ea6b662c37778939372aecd789802d"} Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.829063 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" event={"ID":"f1fc9c22-ce13-4bde-8901-08225613757a","Type":"ContainerStarted","Data":"199632bcf89ff710930337d933e9813c093505083461ee64f2882867a1b2cfc4"} Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.829871 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" event={"ID":"17061bf9-682c-4294-950a-18a5c5a00577","Type":"ContainerStarted","Data":"30e40ca7565e0da08de2d1460acbaaf96c80e3e569e6b1e9f6652b9b9b6a24c8"} Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.830551 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" event={"ID":"1cb3165d-a089-4c3d-9d70-73145248cd5c","Type":"ContainerStarted","Data":"0d1016d31c57377ad02b5c2d46e08005df567e1a415742df11ed03be4b9e920b"} Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.831240 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" event={"ID":"aa6e0f83-d561-4537-934c-54373200aa1f","Type":"ContainerStarted","Data":"3459ac2d9063e1ce45fc7b084f1c02f195d4dac9b108d060882c6dde47cda95a"} Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.831959 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" event={"ID":"96fb0fc0-a7b6-4607-b879-9bfb9f318742","Type":"ContainerStarted","Data":"1765a707906ae372d4228d99e5d0cbf814060393ee14502c26ae2db83cb384d8"} Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.832582 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" event={"ID":"ff361e81-48c6-4926-84d9-fcca6fba71f6","Type":"ContainerStarted","Data":"c8bcffc6a45f637f23d85e572c9cc296ef7887959c4c24f8dd12b611d95e3137"} Jan 22 10:52:42 crc kubenswrapper[4975]: I0122 10:52:42.878511 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:42 crc kubenswrapper[4975]: E0122 10:52:42.878620 4975 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 22 10:52:42 crc kubenswrapper[4975]: E0122 10:52:42.878678 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert podName:fe34d1a8-dae0-4b05-b51f-9f577a2ff8de nodeName:}" failed. No retries permitted until 2026-01-22 10:52:44.878662663 +0000 UTC m=+957.536698773 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert") pod "infra-operator-controller-manager-54ccf4f85d-s5qw6" (UID: "fe34d1a8-dae0-4b05-b51f-9f577a2ff8de") : secret "infra-operator-webhook-server-cert" not found Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.062209 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml"] Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.068135 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6"] Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.081552 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm"] Jan 22 10:52:43 crc kubenswrapper[4975]: W0122 10:52:43.086186 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17acd27c_28fa_4f87_a3e3_c0c63adeeb9a.slice/crio-0b554afd8525f952af39ae2fd796955ac06b399a4ee2e293ff7ce0c875171ecc WatchSource:0}: Error finding container 0b554afd8525f952af39ae2fd796955ac06b399a4ee2e293ff7ce0c875171ecc: Status 404 returned error can't find the container with id 0b554afd8525f952af39ae2fd796955ac06b399a4ee2e293ff7ce0c875171ecc Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.090824 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j"] Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.100210 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h"] Jan 22 10:52:43 crc kubenswrapper[4975]: W0122 10:52:43.100698 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b23e2d7_2430_4770_bb64_b2d8a14c2f7c.slice/crio-419dff9404761382efd3f59775dd524c4675a27041c706887b1ef804fb280425 WatchSource:0}: Error finding container 419dff9404761382efd3f59775dd524c4675a27041c706887b1ef804fb280425: Status 404 returned error can't find the container with id 419dff9404761382efd3f59775dd524c4675a27041c706887b1ef804fb280425 Jan 22 10:52:43 crc kubenswrapper[4975]: W0122 10:52:43.100874 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962350e2_bd43_433c_af57_8094db573873.slice/crio-2de95a05bd92190b38983f559026effda1c42000e9b82797fc0230a56df03552 WatchSource:0}: Error finding container 2de95a05bd92190b38983f559026effda1c42000e9b82797fc0230a56df03552: Status 404 returned error can't find the container with id 2de95a05bd92190b38983f559026effda1c42000e9b82797fc0230a56df03552 Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.119507 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k"] Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.124860 
4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb"] Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.132002 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zz9b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-qcx4k_openstack-operators(a600298c-821b-47b5-9018-8d7ef15267c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.132018 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2pcsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-kfvm6_openstack-operators(e8365e14-67ba-495f-a996-a2b63c7a391e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.132142 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b7tzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.132501 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bbd86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-69d6c9f5b8-dm74c_openstack-operators(962350e2-bd43-433c-af57-8094db573873): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.133753 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4"]
Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.133769 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" podUID="e8365e14-67ba-495f-a996-a2b63c7a391e"
Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.133802 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" podUID="a600298c-821b-47b5-9018-8d7ef15267c8"
Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.133822 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" podUID="177b52ae-55bc-40d1-badb-253927400560"
Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.133838 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" podUID="962350e2-bd43-433c-af57-8094db573873"
Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.139632 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qksg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-kqh4f_openstack-operators(d7834e51-fb9e-4f97-a385-dd98869ece3b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.139927 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c"]
Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.141500 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" podUID="d7834e51-fb9e-4f97-a385-dd98869ece3b"
Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.157908 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gjd9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-ddrzg_openstack-operators(88fd0de4-4508-450b-97b4-3206c50fb0a2): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.158990 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg"
podUID="88fd0de4-4508-450b-97b4-3206c50fb0a2" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.164768 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6"] Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.183125 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f"] Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.183882 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.184053 4975 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.184099 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert podName:dc8e73d1-2925-43ef-a854-eb93ead085d5 nodeName:}" failed. No retries permitted until 2026-01-22 10:52:45.184084264 +0000 UTC m=+957.842120374 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" (UID: "dc8e73d1-2925-43ef-a854-eb93ead085d5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.193574 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg"] Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.285279 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.285337 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.285451 4975 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.285513 4975 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.285524 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. 
No retries permitted until 2026-01-22 10:52:45.285507029 +0000 UTC m=+957.943543129 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "metrics-server-cert" not found Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.285550 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. No retries permitted until 2026-01-22 10:52:45.28553972 +0000 UTC m=+957.943575830 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "webhook-server-cert" not found Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.401759 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.589053 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-catalog-content\") pod \"d4ce709d-306c-4050-87f9-347861884dbc\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.589132 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj8p7\" (UniqueName: \"kubernetes.io/projected/d4ce709d-306c-4050-87f9-347861884dbc-kube-api-access-hj8p7\") pod \"d4ce709d-306c-4050-87f9-347861884dbc\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.589151 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-utilities\") pod \"d4ce709d-306c-4050-87f9-347861884dbc\" (UID: \"d4ce709d-306c-4050-87f9-347861884dbc\") " Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.590250 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-utilities" (OuterVolumeSpecName: "utilities") pod "d4ce709d-306c-4050-87f9-347861884dbc" (UID: "d4ce709d-306c-4050-87f9-347861884dbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.604750 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ce709d-306c-4050-87f9-347861884dbc-kube-api-access-hj8p7" (OuterVolumeSpecName: "kube-api-access-hj8p7") pod "d4ce709d-306c-4050-87f9-347861884dbc" (UID: "d4ce709d-306c-4050-87f9-347861884dbc"). InnerVolumeSpecName "kube-api-access-hj8p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.634389 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4ce709d-306c-4050-87f9-347861884dbc" (UID: "d4ce709d-306c-4050-87f9-347861884dbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.691295 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj8p7\" (UniqueName: \"kubernetes.io/projected/d4ce709d-306c-4050-87f9-347861884dbc-kube-api-access-hj8p7\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.691351 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.691367 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ce709d-306c-4050-87f9-347861884dbc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.905857 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" event={"ID":"3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb","Type":"ContainerStarted","Data":"dfc1c69c126188d042ae2c7392f226a8e82faa80984f3bd95a8ce22e88dc30c7"} Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.918149 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" event={"ID":"834b49cf-0565-4e20-b598-7d01ea144148","Type":"ContainerStarted","Data":"f1922009479b1be8046ecff4aadee7e3a09d1d75ae08fec84254b832de31a326"} Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.927493 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" event={"ID":"17acd27c-28fa-4f87-a3e3-c0c63adeeb9a","Type":"ContainerStarted","Data":"0b554afd8525f952af39ae2fd796955ac06b399a4ee2e293ff7ce0c875171ecc"} Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.935772 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" event={"ID":"177b52ae-55bc-40d1-badb-253927400560","Type":"ContainerStarted","Data":"d2f6bc5bcc06d9b665e3e5b86906b8568bab6296384159d1c1510643c5260822"} Jan 22 10:52:43 crc kubenswrapper[4975]: E0122 10:52:43.938776 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" podUID="177b52ae-55bc-40d1-badb-253927400560" Jan 22 10:52:43 crc kubenswrapper[4975]: I0122 10:52:43.945509 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" event={"ID":"9b23e2d7-2430-4770-bb64-b2d8a14c2f7c","Type":"ContainerStarted","Data":"419dff9404761382efd3f59775dd524c4675a27041c706887b1ef804fb280425"} Jan 22 10:52:44 crc 
kubenswrapper[4975]: I0122 10:52:44.000833 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vskbv" event={"ID":"a490f21d-ca26-48ea-88c7-c3b8ff21995e","Type":"ContainerStarted","Data":"ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b"} Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.010900 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" event={"ID":"c6673acf-6a08-449e-b67a-e1245dbe2206","Type":"ContainerStarted","Data":"b990d03e2360cba59fd270cc65a28933433a7ec2bbdc09faf982efccd619c6d1"} Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.050100 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" event={"ID":"a600298c-821b-47b5-9018-8d7ef15267c8","Type":"ContainerStarted","Data":"15bce7e1c2ca836ea791746c29904f38e8744d1f657a96eac969fa61b740a6bf"} Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.051058 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" podUID="a600298c-821b-47b5-9018-8d7ef15267c8" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.052604 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" event={"ID":"88fd0de4-4508-450b-97b4-3206c50fb0a2","Type":"ContainerStarted","Data":"eb93fb7c1c2cf923f1891da680bf01a3f7e4ac1c59ad1cea2a59b141798f7f72"} Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.053967 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" podUID="88fd0de4-4508-450b-97b4-3206c50fb0a2" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.054588 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" event={"ID":"e8365e14-67ba-495f-a996-a2b63c7a391e","Type":"ContainerStarted","Data":"9f66d6008ddb0b35b4a9f00b580e896b0551e6e9c6ea151aeb787f1ba5389c2a"} Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.055730 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" podUID="e8365e14-67ba-495f-a996-a2b63c7a391e" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.058970 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" event={"ID":"ea739801-0b35-4cd3-a198-a58e93059f88","Type":"ContainerStarted","Data":"4dc3df096bd6eeb14714bb2838ea66f2b94b71f16703188e33b203b4833220f9"} Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.075425 4975 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vskbv" podStartSLOduration=4.565240663 podStartE2EDuration="7.075405227s" podCreationTimestamp="2026-01-22 10:52:37 +0000 UTC" firstStartedPulling="2026-01-22 10:52:38.776311101 +0000 UTC m=+951.434347201" lastFinishedPulling="2026-01-22 10:52:41.286475655 +0000 UTC m=+953.944511765" observedRunningTime="2026-01-22 10:52:44.034635707 +0000 UTC m=+956.692671817" watchObservedRunningTime="2026-01-22 10:52:44.075405227 +0000 UTC m=+956.733441337" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.089633 4975 generic.go:334] "Generic (PLEG): container finished" podID="d4ce709d-306c-4050-87f9-347861884dbc" containerID="75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7" exitCode=0 Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.089732 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pc27q" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.089734 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc27q" event={"ID":"d4ce709d-306c-4050-87f9-347861884dbc","Type":"ContainerDied","Data":"75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7"} Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.089896 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc27q" event={"ID":"d4ce709d-306c-4050-87f9-347861884dbc","Type":"ContainerDied","Data":"1362651f090f673b762f128900d9fde835ea5022eec4a7eb650c45c8cf0b6243"} Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.089934 4975 scope.go:117] "RemoveContainer" containerID="75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.092186 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" event={"ID":"d7834e51-fb9e-4f97-a385-dd98869ece3b","Type":"ContainerStarted","Data":"bdf6e6bb925ae6af83d4f8482a8db74ad3b72e39ecbdc5f762c28f9c24330355"} Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.094720 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" podUID="d7834e51-fb9e-4f97-a385-dd98869ece3b" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.095489 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" event={"ID":"962350e2-bd43-433c-af57-8094db573873","Type":"ContainerStarted","Data":"2de95a05bd92190b38983f559026effda1c42000e9b82797fc0230a56df03552"} Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.102054 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" podUID="962350e2-bd43-433c-af57-8094db573873" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.130771 
4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pc27q"] Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.145196 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pc27q"] Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.169840 4975 scope.go:117] "RemoveContainer" containerID="7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.212562 4975 scope.go:117] "RemoveContainer" containerID="b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.251892 4975 scope.go:117] "RemoveContainer" containerID="75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7" Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.254403 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7\": container with ID starting with 75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7 not found: ID does not exist" containerID="75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.254457 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7"} err="failed to get container status \"75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7\": rpc error: code = NotFound desc = could not find container \"75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7\": container with ID starting with 75e303c5fcb8ecaf2d1a1b30d4cb0815d26c85b5b046aa774c8f31f5b0f2f7b7 not found: ID does not exist" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.254477 4975 scope.go:117] "RemoveContainer" containerID="7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b" Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.254798 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b\": container with ID starting with 7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b not found: ID does not exist" containerID="7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.254821 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b"} err="failed to get container status \"7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b\": rpc error: code = NotFound desc = could not find container \"7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b\": container with ID starting with 7dd3a4082d99a8aae94fcd8ba562f5029b16392edb56946fccb65174595c848b not found: ID does not exist" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.254836 4975 scope.go:117] "RemoveContainer" containerID="b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737" Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.255106 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737\": 
container with ID starting with b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737 not found: ID does not exist" containerID="b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.255127 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737"} err="failed to get container status \"b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737\": rpc error: code = NotFound desc = could not find container \"b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737\": container with ID starting with b4a3475f54707e431d1d02eadc4890ce37d2f106970f96130a6b60d72bfbc737 not found: ID does not exist" Jan 22 10:52:44 crc kubenswrapper[4975]: I0122 10:52:44.916459 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.917561 4975 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 10:52:44 crc kubenswrapper[4975]: E0122 10:52:44.917620 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert podName:fe34d1a8-dae0-4b05-b51f-9f577a2ff8de nodeName:}" failed. No retries permitted until 2026-01-22 10:52:48.917602007 +0000 UTC m=+961.575638117 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert") pod "infra-operator-controller-manager-54ccf4f85d-s5qw6" (UID: "fe34d1a8-dae0-4b05-b51f-9f577a2ff8de") : secret "infra-operator-webhook-server-cert" not found Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.107889 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" podUID="a600298c-821b-47b5-9018-8d7ef15267c8" Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.108482 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" podUID="d7834e51-fb9e-4f97-a385-dd98869ece3b" Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.108613 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" podUID="962350e2-bd43-433c-af57-8094db573873" Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.108728 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" podUID="e8365e14-67ba-495f-a996-a2b63c7a391e" Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.108739 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" podUID="88fd0de4-4508-450b-97b4-3206c50fb0a2" Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.108776 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" podUID="177b52ae-55bc-40d1-badb-253927400560" Jan 22 10:52:45 crc kubenswrapper[4975]: I0122 10:52:45.222784 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.224280 4975 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.224339 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert podName:dc8e73d1-2925-43ef-a854-eb93ead085d5 nodeName:}" failed. No retries permitted until 2026-01-22 10:52:49.224312442 +0000 UTC m=+961.882348552 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" (UID: "dc8e73d1-2925-43ef-a854-eb93ead085d5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:45 crc kubenswrapper[4975]: I0122 10:52:45.327972 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.328158 4975 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 10:52:45 crc kubenswrapper[4975]: I0122 10:52:45.329552 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.329689 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. No retries permitted until 2026-01-22 10:52:49.329662139 +0000 UTC m=+961.987698249 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "metrics-server-cert" not found Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.329739 4975 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 10:52:45 crc kubenswrapper[4975]: E0122 10:52:45.329811 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. No retries permitted until 2026-01-22 10:52:49.329792032 +0000 UTC m=+961.987828202 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "webhook-server-cert" not found Jan 22 10:52:45 crc kubenswrapper[4975]: I0122 10:52:45.771073 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ce709d-306c-4050-87f9-347861884dbc" path="/var/lib/kubelet/pods/d4ce709d-306c-4050-87f9-347861884dbc/volumes" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.736914 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vskbv" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.737628 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vskbv" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.811057 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vskbv" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.833019 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hzlh5"] Jan 22 10:52:47 crc kubenswrapper[4975]: E0122 10:52:47.833469 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ce709d-306c-4050-87f9-347861884dbc" containerName="extract-utilities" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.833494 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ce709d-306c-4050-87f9-347861884dbc" containerName="extract-utilities" Jan 22 10:52:47 crc kubenswrapper[4975]: E0122 10:52:47.833525 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ce709d-306c-4050-87f9-347861884dbc" containerName="registry-server" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.833536 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ce709d-306c-4050-87f9-347861884dbc" containerName="registry-server" Jan 22 10:52:47 crc kubenswrapper[4975]: E0122 10:52:47.833561 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ce709d-306c-4050-87f9-347861884dbc" containerName="extract-content" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.833584 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ce709d-306c-4050-87f9-347861884dbc" containerName="extract-content" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.833814 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ce709d-306c-4050-87f9-347861884dbc" containerName="registry-server" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.835129 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.844124 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzlh5"] Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.967905 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-catalog-content\") pod \"redhat-marketplace-hzlh5\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.967946 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-utilities\") pod \"redhat-marketplace-hzlh5\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:47 crc kubenswrapper[4975]: I0122 10:52:47.968089 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4pl4\" (UniqueName: \"kubernetes.io/projected/7daf3043-6acd-4038-b576-f626c6ddcb52-kube-api-access-b4pl4\") pod \"redhat-marketplace-hzlh5\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:48 crc kubenswrapper[4975]: I0122 10:52:48.070442 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-catalog-content\") pod \"redhat-marketplace-hzlh5\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:48 crc kubenswrapper[4975]: I0122 10:52:48.070505 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-utilities\") pod \"redhat-marketplace-hzlh5\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:48 crc kubenswrapper[4975]: I0122 10:52:48.070633 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4pl4\" (UniqueName: \"kubernetes.io/projected/7daf3043-6acd-4038-b576-f626c6ddcb52-kube-api-access-b4pl4\") pod \"redhat-marketplace-hzlh5\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:48 crc kubenswrapper[4975]: I0122 10:52:48.079543 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-utilities\") pod \"redhat-marketplace-hzlh5\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:48 crc kubenswrapper[4975]: I0122 10:52:48.079687 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-catalog-content\") pod \"redhat-marketplace-hzlh5\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:48 crc kubenswrapper[4975]: I0122 10:52:48.097513 4975 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b4pl4\" (UniqueName: \"kubernetes.io/projected/7daf3043-6acd-4038-b576-f626c6ddcb52-kube-api-access-b4pl4\") pod \"redhat-marketplace-hzlh5\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:48 crc kubenswrapper[4975]: I0122 10:52:48.162014 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:52:48 crc kubenswrapper[4975]: I0122 10:52:48.176972 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vskbv" Jan 22 10:52:48 crc kubenswrapper[4975]: I0122 10:52:48.981403 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:48 crc kubenswrapper[4975]: E0122 10:52:48.981774 4975 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 10:52:48 crc kubenswrapper[4975]: E0122 10:52:48.981818 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert podName:fe34d1a8-dae0-4b05-b51f-9f577a2ff8de nodeName:}" failed. No retries permitted until 2026-01-22 10:52:56.98180492 +0000 UTC m=+969.639841030 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert") pod "infra-operator-controller-manager-54ccf4f85d-s5qw6" (UID: "fe34d1a8-dae0-4b05-b51f-9f577a2ff8de") : secret "infra-operator-webhook-server-cert" not found Jan 22 10:52:49 crc kubenswrapper[4975]: I0122 10:52:49.286307 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:49 crc kubenswrapper[4975]: E0122 10:52:49.286836 4975 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:49 crc kubenswrapper[4975]: E0122 10:52:49.286910 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert podName:dc8e73d1-2925-43ef-a854-eb93ead085d5 nodeName:}" failed. No retries permitted until 2026-01-22 10:52:57.286889651 +0000 UTC m=+969.944925841 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" (UID: "dc8e73d1-2925-43ef-a854-eb93ead085d5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:52:49 crc kubenswrapper[4975]: I0122 10:52:49.387775 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:49 crc kubenswrapper[4975]: I0122 10:52:49.387863 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:49 crc kubenswrapper[4975]: E0122 10:52:49.388037 4975 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 10:52:49 crc kubenswrapper[4975]: E0122 10:52:49.388132 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. No retries permitted until 2026-01-22 10:52:57.388105851 +0000 UTC m=+970.046141991 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "metrics-server-cert" not found Jan 22 10:52:49 crc kubenswrapper[4975]: E0122 10:52:49.388168 4975 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 10:52:49 crc kubenswrapper[4975]: E0122 10:52:49.388280 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs podName:2203164e-ced5-46bd-85c1-92eb94c7fb9f nodeName:}" failed. No retries permitted until 2026-01-22 10:52:57.388252365 +0000 UTC m=+970.046288515 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs") pod "openstack-operator-controller-manager-fcdd7b6c4-lq88q" (UID: "2203164e-ced5-46bd-85c1-92eb94c7fb9f") : secret "webhook-server-cert" not found Jan 22 10:52:50 crc kubenswrapper[4975]: I0122 10:52:50.412114 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vskbv"] Jan 22 10:52:51 crc kubenswrapper[4975]: I0122 10:52:51.149886 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vskbv" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="registry-server" containerID="cri-o://ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b" gracePeriod=2 Jan 22 10:52:52 crc kubenswrapper[4975]: I0122 10:52:52.157784 4975 generic.go:334] "Generic (PLEG): container finished" podID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerID="ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b" exitCode=0 Jan 22 10:52:52 crc kubenswrapper[4975]: I0122 10:52:52.157831 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vskbv" event={"ID":"a490f21d-ca26-48ea-88c7-c3b8ff21995e","Type":"ContainerDied","Data":"ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b"} Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.003130 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.016124 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe34d1a8-dae0-4b05-b51f-9f577a2ff8de-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-s5qw6\" (UID: \"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.260473 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gw52z" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.269037 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.308452 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.317547 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc8e73d1-2925-43ef-a854-eb93ead085d5-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7\" (UID: \"dc8e73d1-2925-43ef-a854-eb93ead085d5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.410681 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.410779 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.416101 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-metrics-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.416571 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2203164e-ced5-46bd-85c1-92eb94c7fb9f-webhook-certs\") pod \"openstack-operator-controller-manager-fcdd7b6c4-lq88q\" (UID: \"2203164e-ced5-46bd-85c1-92eb94c7fb9f\") " pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.436818 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.437241 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 
10:52:57.437297 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.438058 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16a708e48cbc5f721c35b9e8acfcd4898f3835379f880a9ee0b3d9dc8b03fa83"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.438135 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://16a708e48cbc5f721c35b9e8acfcd4898f3835379f880a9ee0b3d9dc8b03fa83" gracePeriod=600 Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.559647 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9zgv5" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.567726 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.601845 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xx56x" Jan 22 10:52:57 crc kubenswrapper[4975]: I0122 10:52:57.609409 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:52:57 crc kubenswrapper[4975]: E0122 10:52:57.683559 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 22 10:52:57 crc kubenswrapper[4975]: E0122 10:52:57.683756 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lw6pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-l8cz4_openstack-operators(c6673acf-6a08-449e-b67a-e1245dbe2206): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:52:57 crc kubenswrapper[4975]: E0122 10:52:57.684996 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" podUID="c6673acf-6a08-449e-b67a-e1245dbe2206" Jan 22 10:52:57 crc kubenswrapper[4975]: E0122 10:52:57.737478 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b is running failed: container process not found" containerID="ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:52:57 crc kubenswrapper[4975]: E0122 10:52:57.737952 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b is running failed: container process not found" containerID="ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:52:57 crc kubenswrapper[4975]: E0122 10:52:57.738212 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b is running failed: container process not found" containerID="ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:52:57 crc kubenswrapper[4975]: E0122 10:52:57.738245 4975 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-vskbv" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="registry-server" Jan 22 10:52:58 
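The sequence above is the kubelet's image-pull failure path in full: the CRI PullImage call for the telemetry-operator image is canceled mid-copy, kuberuntime_manager dumps the entire &Container{...} spec as an UnhandledError, and pod_workers abandons the sync with ErrImagePull. On the next sync attempts (10:52:58 below) the same pod is throttled with ImagePullBackOff rather than re-pulled immediately. A sketch of that back-off pattern, assuming the commonly documented kubelet defaults of a 10s initial delay doubling up to a 5-minute cap (the exact constants are internal to the kubelet):

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the delay after each consecutive pull failure,
// capped at maxDelay. This mirrors the ErrImagePull -> ImagePullBackOff
// cycle visible in the log; the 10s/5m values are assumptions.
func nextBackoff(cur, maxDelay time.Duration) time.Duration {
	if cur == 0 {
		return 10 * time.Second
	}
	if cur*2 > maxDelay {
		return maxDelay
	}
	return cur * 2
}

func main() {
	var d time.Duration
	for i := 0; i < 7; i++ {
		d = nextBackoff(d, 5*time.Minute)
		fmt.Printf("retry %d after %s\n", i+1, d)
	}
}
```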
crc kubenswrapper[4975]: I0122 10:52:58.212472 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="16a708e48cbc5f721c35b9e8acfcd4898f3835379f880a9ee0b3d9dc8b03fa83" exitCode=0 Jan 22 10:52:58 crc kubenswrapper[4975]: I0122 10:52:58.213247 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"16a708e48cbc5f721c35b9e8acfcd4898f3835379f880a9ee0b3d9dc8b03fa83"} Jan 22 10:52:58 crc kubenswrapper[4975]: I0122 10:52:58.213293 4975 scope.go:117] "RemoveContainer" containerID="a53aa8903fe7a0da9b7319f66f37bade3977b1a8441ec861195439bca40661fa" Jan 22 10:52:58 crc kubenswrapper[4975]: E0122 10:52:58.216580 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" podUID="c6673acf-6a08-449e-b67a-e1245dbe2206" Jan 22 10:52:58 crc kubenswrapper[4975]: E0122 10:52:58.315176 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 22 10:52:58 crc kubenswrapper[4975]: E0122 10:52:58.315405 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxthp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-hgw2h_openstack-operators(9b23e2d7-2430-4770-bb64-b2d8a14c2f7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:52:58 crc kubenswrapper[4975]: E0122 10:52:58.316669 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" podUID="9b23e2d7-2430-4770-bb64-b2d8a14c2f7c" Jan 22 10:52:59 crc kubenswrapper[4975]: E0122 10:52:59.045450 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 22 10:52:59 crc kubenswrapper[4975]: E0122 10:52:59.045936 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7bwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
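Each of these &Container{...} dumps carries the same probe wiring: --health-probe-bind-address=:8081 in Args, a liveness probe on /healthz port 8081, and a readiness probe on /readyz port 8081. That is the standard kubebuilder/controller-runtime scaffolding; a minimal sketch of the manager side that serves those endpoints, assuming these operators follow it:

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

func main() {
	// --health-probe-bind-address=:8081 in the container Args maps to
	// HealthProbeBindAddress; /healthz and /readyz are the endpoints
	// the probes in the spec dumps target.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		HealthProbeBindAddress: ":8081",
	})
	if err != nil {
		panic(err)
	}
	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		panic(err)
	}
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```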
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-x2kk6_openstack-operators(ea739801-0b35-4cd3-a198-a58e93059f88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:52:59 crc kubenswrapper[4975]: E0122 10:52:59.047150 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" podUID="ea739801-0b35-4cd3-a198-a58e93059f88" Jan 22 10:52:59 crc kubenswrapper[4975]: E0122 10:52:59.226124 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" podUID="ea739801-0b35-4cd3-a198-a58e93059f88" Jan 22 10:52:59 crc kubenswrapper[4975]: E0122 10:52:59.229057 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" podUID="9b23e2d7-2430-4770-bb64-b2d8a14c2f7c" Jan 22 10:52:59 crc kubenswrapper[4975]: E0122 10:52:59.727057 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:e5e017be64edd679623ea1b7e6a1ae780fdcee4ef79be989b93d8c1d082da15b" Jan 22 10:52:59 crc kubenswrapper[4975]: E0122 10:52:59.727209 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:e5e017be64edd679623ea1b7e6a1ae780fdcee4ef79be989b93d8c1d082da15b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9c4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-59dd8b7cbf-wsw9j_openstack-operators(1cb3165d-a089-4c3d-9d70-73145248cd5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:52:59 crc kubenswrapper[4975]: E0122 10:52:59.728533 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" podUID="1cb3165d-a089-4c3d-9d70-73145248cd5c" Jan 22 10:53:00 crc kubenswrapper[4975]: E0122 10:53:00.235828 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:e5e017be64edd679623ea1b7e6a1ae780fdcee4ef79be989b93d8c1d082da15b\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" podUID="1cb3165d-a089-4c3d-9d70-73145248cd5c" Jan 22 10:53:00 crc kubenswrapper[4975]: E0122 10:53:00.442634 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 22 10:53:00 crc kubenswrapper[4975]: E0122 10:53:00.442832 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s26lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-48bd6_openstack-operators(ff361e81-48c6-4926-84d9-fcca6fba71f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:53:00 crc kubenswrapper[4975]: E0122 10:53:00.444113 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" podUID="ff361e81-48c6-4926-84d9-fcca6fba71f6" Jan 22 10:53:01 crc kubenswrapper[4975]: E0122 10:53:01.044891 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 22 10:53:01 crc kubenswrapper[4975]: E0122 10:53:01.045428 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-wmzml_openstack-operators(834b49cf-0565-4e20-b598-7d01ea144148): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:53:01 crc kubenswrapper[4975]: E0122 10:53:01.046671 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" podUID="834b49cf-0565-4e20-b598-7d01ea144148" Jan 22 10:53:01 crc kubenswrapper[4975]: E0122 10:53:01.244159 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" podUID="ff361e81-48c6-4926-84d9-fcca6fba71f6" Jan 22 10:53:01 crc kubenswrapper[4975]: E0122 10:53:01.244676 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" 
podUID="834b49cf-0565-4e20-b598-7d01ea144148" Jan 22 10:53:02 crc kubenswrapper[4975]: E0122 10:53:02.074481 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 22 10:53:02 crc kubenswrapper[4975]: E0122 10:53:02.075073 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8xsfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-g49ht_openstack-operators(96fb0fc0-a7b6-4607-b879-9bfb9f318742): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:53:02 crc kubenswrapper[4975]: E0122 10:53:02.076469 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" podUID="96fb0fc0-a7b6-4607-b879-9bfb9f318742" Jan 22 10:53:02 crc kubenswrapper[4975]: E0122 10:53:02.251265 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" podUID="96fb0fc0-a7b6-4607-b879-9bfb9f318742" Jan 22 10:53:02 crc kubenswrapper[4975]: E0122 10:53:02.626687 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 22 10:53:02 crc kubenswrapper[4975]: E0122 10:53:02.626905 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-49fh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-6dln8_openstack-operators(17061bf9-682c-4294-950a-18a5c5a00577): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:53:02 crc kubenswrapper[4975]: E0122 10:53:02.628110 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" 
podUID="17061bf9-682c-4294-950a-18a5c5a00577" Jan 22 10:53:03 crc kubenswrapper[4975]: E0122 10:53:03.263526 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" podUID="17061bf9-682c-4294-950a-18a5c5a00577" Jan 22 10:53:06 crc kubenswrapper[4975]: E0122 10:53:06.696571 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f" Jan 22 10:53:06 crc kubenswrapper[4975]: E0122 10:53:06.697004 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztxcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-69cf5d4557-5w2lh_openstack-operators(f1fa565f-67df-4a8f-9f10-33397d530bbb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:53:06 crc kubenswrapper[4975]: E0122 10:53:06.698228 4975 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" podUID="f1fa565f-67df-4a8f-9f10-33397d530bbb" Jan 22 10:53:07 crc kubenswrapper[4975]: E0122 10:53:07.297481 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" podUID="f1fa565f-67df-4a8f-9f10-33397d530bbb" Jan 22 10:53:07 crc kubenswrapper[4975]: I0122 10:53:07.345553 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6"] Jan 22 10:53:07 crc kubenswrapper[4975]: E0122 10:53:07.736756 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b is running failed: container process not found" containerID="ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:53:07 crc kubenswrapper[4975]: E0122 10:53:07.737291 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b is running failed: container process not found" containerID="ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:53:07 crc kubenswrapper[4975]: E0122 10:53:07.737606 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b is running failed: container process not found" containerID="ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:53:07 crc kubenswrapper[4975]: E0122 10:53:07.737632 4975 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-vskbv" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="registry-server" Jan 22 10:53:09 crc kubenswrapper[4975]: E0122 10:53:09.368557 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 22 10:53:09 crc kubenswrapper[4975]: E0122 10:53:09.369096 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-859js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-jk546_openstack-operators(f1fc9c22-ce13-4bde-8901-08225613757a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:53:09 crc kubenswrapper[4975]: E0122 10:53:09.370396 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" podUID="f1fc9c22-ce13-4bde-8901-08225613757a" Jan 22 10:53:09 crc kubenswrapper[4975]: E0122 10:53:09.819985 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 22 10:53:09 crc kubenswrapper[4975]: E0122 10:53:09.820164 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t26sk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-4jt9j_openstack-operators(3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:53:09 crc kubenswrapper[4975]: E0122 10:53:09.821611 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" podUID="3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb" Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.859491 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vskbv" Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.895245 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-catalog-content\") pod \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.895355 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-utilities\") pod \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.895406 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz22p\" (UniqueName: \"kubernetes.io/projected/a490f21d-ca26-48ea-88c7-c3b8ff21995e-kube-api-access-jz22p\") pod \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\" (UID: \"a490f21d-ca26-48ea-88c7-c3b8ff21995e\") " Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.896658 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-utilities" (OuterVolumeSpecName: "utilities") pod "a490f21d-ca26-48ea-88c7-c3b8ff21995e" (UID: "a490f21d-ca26-48ea-88c7-c3b8ff21995e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.896966 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.915505 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a490f21d-ca26-48ea-88c7-c3b8ff21995e-kube-api-access-jz22p" (OuterVolumeSpecName: "kube-api-access-jz22p") pod "a490f21d-ca26-48ea-88c7-c3b8ff21995e" (UID: "a490f21d-ca26-48ea-88c7-c3b8ff21995e"). InnerVolumeSpecName "kube-api-access-jz22p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.956509 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a490f21d-ca26-48ea-88c7-c3b8ff21995e" (UID: "a490f21d-ca26-48ea-88c7-c3b8ff21995e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.998264 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz22p\" (UniqueName: \"kubernetes.io/projected/a490f21d-ca26-48ea-88c7-c3b8ff21995e-kube-api-access-jz22p\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:09 crc kubenswrapper[4975]: I0122 10:53:09.998314 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a490f21d-ca26-48ea-88c7-c3b8ff21995e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.314758 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vskbv" event={"ID":"a490f21d-ca26-48ea-88c7-c3b8ff21995e","Type":"ContainerDied","Data":"abd8c378073cfed7022c42468a70584c0cb593669e1dc7ba82bfed90165df13d"} Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.314798 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vskbv" Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.316002 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" event={"ID":"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de","Type":"ContainerStarted","Data":"50d5671a6335e4af6aeec28688ebff743e668fd9074c405c04acae109d434443"} Jan 22 10:53:10 crc kubenswrapper[4975]: E0122 10:53:10.317319 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" podUID="f1fc9c22-ce13-4bde-8901-08225613757a" Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.340455 4975 scope.go:117] "RemoveContainer" containerID="ed3716d893b0dcd43b6a71a3e28854909c47e7f4d7e08a57cff8c08a5edd997b" Jan 22 10:53:10 crc kubenswrapper[4975]: E0122 10:53:10.340520 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" podUID="3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb" Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.427341 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vskbv"] Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.433999 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vskbv"] Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.483459 4975 scope.go:117] "RemoveContainer" containerID="26c0eb1c27c5809b4b743147ef4710f2d6fc7d0b4a34f2d19c0d72cbcaa2b8d4" Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.524529 4975 scope.go:117] "RemoveContainer" containerID="1bd0cc8c960ea514d070992819844b00d9e158704e62c1d5097ea692f0701e8e" Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.801933 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzlh5"] Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.830095 4975 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q"] Jan 22 10:53:10 crc kubenswrapper[4975]: W0122 10:53:10.846588 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7daf3043_6acd_4038_b576_f626c6ddcb52.slice/crio-4ba7a621207deafe5cd916f86f7b44fa5d8c44ca01852c94cb895a70c5f89521 WatchSource:0}: Error finding container 4ba7a621207deafe5cd916f86f7b44fa5d8c44ca01852c94cb895a70c5f89521: Status 404 returned error can't find the container with id 4ba7a621207deafe5cd916f86f7b44fa5d8c44ca01852c94cb895a70c5f89521 Jan 22 10:53:10 crc kubenswrapper[4975]: I0122 10:53:10.946943 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7"] Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.323911 4975 generic.go:334] "Generic (PLEG): container finished" podID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerID="29d9f6f9de3092ee540a7749df276a8084433d0ee8ceb6cde5035eee7dbe7b97" exitCode=0 Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.324047 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzlh5" event={"ID":"7daf3043-6acd-4038-b576-f626c6ddcb52","Type":"ContainerDied","Data":"29d9f6f9de3092ee540a7749df276a8084433d0ee8ceb6cde5035eee7dbe7b97"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.324155 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzlh5" event={"ID":"7daf3043-6acd-4038-b576-f626c6ddcb52","Type":"ContainerStarted","Data":"4ba7a621207deafe5cd916f86f7b44fa5d8c44ca01852c94cb895a70c5f89521"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.328088 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" event={"ID":"c6673acf-6a08-449e-b67a-e1245dbe2206","Type":"ContainerStarted","Data":"40a18133094569036d3183c1a43fbd2d3549a142e38ebcba142d6b5c2b67b52a"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.328755 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.331197 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" event={"ID":"aa6e0f83-d561-4537-934c-54373200aa1f","Type":"ContainerStarted","Data":"8037d6648e4b47cdf874d90d349cd83339379866879e3d1601baf7f498700da1"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.331640 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.334199 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"96a4c784d660fbbfb3a8a9514335df4fa2fd66b35e30bd92a673d8e26178170f"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.336820 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" 
event={"ID":"177b52ae-55bc-40d1-badb-253927400560","Type":"ContainerStarted","Data":"bcb5d4af30657826f1670c829e7f360713a95910d221ee917f9f44db8e38100f"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.337024 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.339749 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" event={"ID":"9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62","Type":"ContainerStarted","Data":"d6f3905d9774ee3d7891b864f58ac46c54d3fce7d587e1c05f6f6c057a773d32"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.339871 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.348594 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" event={"ID":"a600298c-821b-47b5-9018-8d7ef15267c8","Type":"ContainerStarted","Data":"a70c9559c15b7e3bcf31b35ca11326991055cf6483ca3ef17b3cb683b345298b"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.348800 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.350591 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" event={"ID":"dc8e73d1-2925-43ef-a854-eb93ead085d5","Type":"ContainerStarted","Data":"e84862da337c60ad9bfff18a6708a3501d33f064fc44e08c3ce1b51223cc233c"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.352342 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" event={"ID":"d7834e51-fb9e-4f97-a385-dd98869ece3b","Type":"ContainerStarted","Data":"557c78d7c201f24639260817ef8a1dd6d4324a5e70cf93b151f149c65881984e"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.357359 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" event={"ID":"17acd27c-28fa-4f87-a3e3-c0c63adeeb9a","Type":"ContainerStarted","Data":"33d343c5d26b38965114fc086ad129fd3cfe077176b39d7ebaebd54b4483ad99"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.357864 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.359916 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" podStartSLOduration=6.369718674 podStartE2EDuration="30.359901104s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="2026-01-22 10:52:42.693255479 +0000 UTC m=+955.351291589" lastFinishedPulling="2026-01-22 10:53:06.683437909 +0000 UTC m=+979.341474019" observedRunningTime="2026-01-22 10:53:11.355941377 +0000 UTC m=+984.013977487" watchObservedRunningTime="2026-01-22 10:53:11.359901104 +0000 UTC m=+984.017937214" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.361909 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" event={"ID":"962350e2-bd43-433c-af57-8094db573873","Type":"ContainerStarted","Data":"bc3acaba6417b1ea4179a9d88e71ab9199fc6e18520cdd905961433e9b81f828"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.363350 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.368745 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" event={"ID":"88fd0de4-4508-450b-97b4-3206c50fb0a2","Type":"ContainerStarted","Data":"6ca61bb719845432db792be4ca96ff6f6f8ae3575e4f387ee67360147169a0b6"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.369314 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.370537 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" event={"ID":"e8365e14-67ba-495f-a996-a2b63c7a391e","Type":"ContainerStarted","Data":"6cd2df14287169463de4979c359d3191af0495c29590fbed828605a1729fbdec"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.370923 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.373666 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" event={"ID":"2203164e-ced5-46bd-85c1-92eb94c7fb9f","Type":"ContainerStarted","Data":"f188c4d39489de734ca90d5727e30ef6c32b39dfb828cf84e28932bbb4b4db6e"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.373696 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" event={"ID":"2203164e-ced5-46bd-85c1-92eb94c7fb9f","Type":"ContainerStarted","Data":"0196eae2bb35e00efb13f45b2b42e00148098e410acd79659a0bdd59475df654"} Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.374182 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.416022 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" podStartSLOduration=7.46883312 podStartE2EDuration="31.416006152s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:42.737479516 +0000 UTC m=+955.395515626" lastFinishedPulling="2026-01-22 10:53:06.684652518 +0000 UTC m=+979.342688658" observedRunningTime="2026-01-22 10:53:11.385721145 +0000 UTC m=+984.043757265" watchObservedRunningTime="2026-01-22 10:53:11.416006152 +0000 UTC m=+984.074042262" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.460025 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" podStartSLOduration=3.027307506 podStartE2EDuration="30.46000138s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.109936571 +0000 UTC m=+955.767972681" 
lastFinishedPulling="2026-01-22 10:53:10.542630445 +0000 UTC m=+983.200666555" observedRunningTime="2026-01-22 10:53:11.433582789 +0000 UTC m=+984.091618899" watchObservedRunningTime="2026-01-22 10:53:11.46000138 +0000 UTC m=+984.118037490" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.466596 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" podStartSLOduration=3.760181031 podStartE2EDuration="30.466575624s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.132012358 +0000 UTC m=+955.790048468" lastFinishedPulling="2026-01-22 10:53:09.838406951 +0000 UTC m=+982.496443061" observedRunningTime="2026-01-22 10:53:11.455482583 +0000 UTC m=+984.113518693" watchObservedRunningTime="2026-01-22 10:53:11.466575624 +0000 UTC m=+984.124611734" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.516603 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" podStartSLOduration=4.232920683 podStartE2EDuration="31.516584057s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.131944256 +0000 UTC m=+955.789980366" lastFinishedPulling="2026-01-22 10:53:10.41560763 +0000 UTC m=+983.073643740" observedRunningTime="2026-01-22 10:53:11.487718729 +0000 UTC m=+984.145754839" watchObservedRunningTime="2026-01-22 10:53:11.516584057 +0000 UTC m=+984.174620167" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.520457 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" podStartSLOduration=4.334500946 podStartE2EDuration="31.520447044s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.131897305 +0000 UTC m=+955.789933415" lastFinishedPulling="2026-01-22 10:53:10.317843403 +0000 UTC m=+982.975879513" observedRunningTime="2026-01-22 10:53:11.504835476 +0000 UTC m=+984.162871576" watchObservedRunningTime="2026-01-22 10:53:11.520447044 +0000 UTC m=+984.178483154" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.550159 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" podStartSLOduration=6.975027562 podStartE2EDuration="30.550143702s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.109474828 +0000 UTC m=+955.767510928" lastFinishedPulling="2026-01-22 10:53:06.684590958 +0000 UTC m=+979.342627068" observedRunningTime="2026-01-22 10:53:11.527697969 +0000 UTC m=+984.185734079" watchObservedRunningTime="2026-01-22 10:53:11.550143702 +0000 UTC m=+984.208179812" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.557323 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" podStartSLOduration=4.321993759 podStartE2EDuration="31.557308587s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.132366038 +0000 UTC m=+955.790402148" lastFinishedPulling="2026-01-22 10:53:10.367680876 +0000 UTC m=+983.025716976" observedRunningTime="2026-01-22 10:53:11.546610395 +0000 UTC m=+984.204646495" watchObservedRunningTime="2026-01-22 10:53:11.557308587 +0000 UTC m=+984.215344697" Jan 22 10:53:11 crc 
kubenswrapper[4975]: I0122 10:53:11.571363 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kqh4f" podStartSLOduration=3.173681213 podStartE2EDuration="30.571348477s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.139449655 +0000 UTC m=+955.797485765" lastFinishedPulling="2026-01-22 10:53:10.537116899 +0000 UTC m=+983.195153029" observedRunningTime="2026-01-22 10:53:11.568324939 +0000 UTC m=+984.226361039" watchObservedRunningTime="2026-01-22 10:53:11.571348477 +0000 UTC m=+984.229384587" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.646490 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" podStartSLOduration=30.64647512 podStartE2EDuration="30.64647512s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:53:11.644637582 +0000 UTC m=+984.302673692" watchObservedRunningTime="2026-01-22 10:53:11.64647512 +0000 UTC m=+984.304511230" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.650305 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" podStartSLOduration=3.314914402 podStartE2EDuration="30.650295287s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.157744707 +0000 UTC m=+955.815780817" lastFinishedPulling="2026-01-22 10:53:10.493125592 +0000 UTC m=+983.151161702" observedRunningTime="2026-01-22 10:53:11.602606523 +0000 UTC m=+984.260642633" watchObservedRunningTime="2026-01-22 10:53:11.650295287 +0000 UTC m=+984.308331397" Jan 22 10:53:11 crc kubenswrapper[4975]: I0122 10:53:11.778702 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" path="/var/lib/kubelet/pods/a490f21d-ca26-48ea-88c7-c3b8ff21995e/volumes" Jan 22 10:53:13 crc kubenswrapper[4975]: I0122 10:53:13.392176 4975 generic.go:334] "Generic (PLEG): container finished" podID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerID="b4eef09717433a663c64f54d8556d64cec86459326c62328405ef21c9e08dc72" exitCode=0 Jan 22 10:53:13 crc kubenswrapper[4975]: I0122 10:53:13.392237 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzlh5" event={"ID":"7daf3043-6acd-4038-b576-f626c6ddcb52","Type":"ContainerDied","Data":"b4eef09717433a663c64f54d8556d64cec86459326c62328405ef21c9e08dc72"} Jan 22 10:53:17 crc kubenswrapper[4975]: I0122 10:53:17.577370 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-fcdd7b6c4-lq88q" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.454657 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzlh5" event={"ID":"7daf3043-6acd-4038-b576-f626c6ddcb52","Type":"ContainerStarted","Data":"c58bc6aef2c5cf20c7a9825a1aaf5b435fcaf753e023b263c94f0de91c31c666"} Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.457152 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" 
event={"ID":"ea739801-0b35-4cd3-a198-a58e93059f88","Type":"ContainerStarted","Data":"f50e80016ee8edb6126dc7bb0030cfa7549b6f8b29d318011ceda795bf98f113"} Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.457546 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.458779 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" event={"ID":"1cb3165d-a089-4c3d-9d70-73145248cd5c","Type":"ContainerStarted","Data":"fbee0ecbe2eb4ec75e57a2b288b7abd1d14da6f1470bd9a608260b2ffd2c7623"} Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.458914 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.460055 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" event={"ID":"96fb0fc0-a7b6-4607-b879-9bfb9f318742","Type":"ContainerStarted","Data":"74fb42ad0e30fa6024fcafc34ea58ec5940057acff2f543c2418d3e96b4b3f27"} Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.460254 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.463944 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" event={"ID":"9b23e2d7-2430-4770-bb64-b2d8a14c2f7c","Type":"ContainerStarted","Data":"836bec7c8c51a54b50893f7ebbf2657ab3500752933dba25408cc5d6b3c42e9d"} Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.464098 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.470435 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" event={"ID":"834b49cf-0565-4e20-b598-7d01ea144148","Type":"ContainerStarted","Data":"7ec273704d74b36f12f965e273dc19d97bf5624f1bd48e0860d07ff4c21c4f5f"} Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.470637 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.474865 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" event={"ID":"dc8e73d1-2925-43ef-a854-eb93ead085d5","Type":"ContainerStarted","Data":"8487e0bbc76f0a287454c351994efab8891bdca29010231f7952256625147c22"} Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.475060 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.479256 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" event={"ID":"ff361e81-48c6-4926-84d9-fcca6fba71f6","Type":"ContainerStarted","Data":"2ba5204e2d2b1325fe4173f8e12899824afbc16eabfcef7deaa76b907cdab891"} Jan 22 10:53:18 crc 
kubenswrapper[4975]: I0122 10:53:18.479460 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.481239 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" event={"ID":"fe34d1a8-dae0-4b05-b51f-9f577a2ff8de","Type":"ContainerStarted","Data":"4d2ccd9ee25ad70e1c556096005845b05cd9e3050d00bde377af4986cb3d9085"} Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.481378 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.494543 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hzlh5" podStartSLOduration=25.175479345 podStartE2EDuration="31.494525772s" podCreationTimestamp="2026-01-22 10:52:47 +0000 UTC" firstStartedPulling="2026-01-22 10:53:11.327181178 +0000 UTC m=+983.985217288" lastFinishedPulling="2026-01-22 10:53:17.646227595 +0000 UTC m=+990.304263715" observedRunningTime="2026-01-22 10:53:18.493745502 +0000 UTC m=+991.151781612" watchObservedRunningTime="2026-01-22 10:53:18.494525772 +0000 UTC m=+991.152561882" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.535688 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" podStartSLOduration=31.025324397 podStartE2EDuration="37.535671971s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="2026-01-22 10:53:11.053805593 +0000 UTC m=+983.711841703" lastFinishedPulling="2026-01-22 10:53:17.564153137 +0000 UTC m=+990.222189277" observedRunningTime="2026-01-22 10:53:18.529864406 +0000 UTC m=+991.187900516" watchObservedRunningTime="2026-01-22 10:53:18.535671971 +0000 UTC m=+991.193708081" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.543663 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" podStartSLOduration=2.992603268 podStartE2EDuration="37.543648045s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.081570768 +0000 UTC m=+955.739606878" lastFinishedPulling="2026-01-22 10:53:17.632615535 +0000 UTC m=+990.290651655" observedRunningTime="2026-01-22 10:53:18.541819326 +0000 UTC m=+991.199855436" watchObservedRunningTime="2026-01-22 10:53:18.543648045 +0000 UTC m=+991.201684145" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.581669 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" podStartSLOduration=30.850422974 podStartE2EDuration="38.581650676s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:53:09.833034585 +0000 UTC m=+982.491070715" lastFinishedPulling="2026-01-22 10:53:17.564262307 +0000 UTC m=+990.222298417" observedRunningTime="2026-01-22 10:53:18.561111912 +0000 UTC m=+991.219148022" watchObservedRunningTime="2026-01-22 10:53:18.581650676 +0000 UTC m=+991.239686776" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.582184 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" podStartSLOduration=3.122534776 podStartE2EDuration="38.582179116s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:42.173090265 +0000 UTC m=+954.831126375" lastFinishedPulling="2026-01-22 10:53:17.632734595 +0000 UTC m=+990.290770715" observedRunningTime="2026-01-22 10:53:18.579257998 +0000 UTC m=+991.237294108" watchObservedRunningTime="2026-01-22 10:53:18.582179116 +0000 UTC m=+991.240215226" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.609108 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" podStartSLOduration=3.739441226 podStartE2EDuration="38.609090876s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:42.732194148 +0000 UTC m=+955.390230258" lastFinishedPulling="2026-01-22 10:53:17.601843788 +0000 UTC m=+990.259879908" observedRunningTime="2026-01-22 10:53:18.607144867 +0000 UTC m=+991.265180977" watchObservedRunningTime="2026-01-22 10:53:18.609090876 +0000 UTC m=+991.267126986" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.630490 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" podStartSLOduration=3.747119413 podStartE2EDuration="38.63047413s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:42.69437228 +0000 UTC m=+955.352408390" lastFinishedPulling="2026-01-22 10:53:17.577726997 +0000 UTC m=+990.235763107" observedRunningTime="2026-01-22 10:53:18.627998832 +0000 UTC m=+991.286034932" watchObservedRunningTime="2026-01-22 10:53:18.63047413 +0000 UTC m=+991.288510240" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.644852 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" podStartSLOduration=3.09936743 podStartE2EDuration="37.64483187s" podCreationTimestamp="2026-01-22 10:52:41 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.104663813 +0000 UTC m=+955.762699923" lastFinishedPulling="2026-01-22 10:53:17.650128243 +0000 UTC m=+990.308164363" observedRunningTime="2026-01-22 10:53:18.641102432 +0000 UTC m=+991.299138532" watchObservedRunningTime="2026-01-22 10:53:18.64483187 +0000 UTC m=+991.302867980" Jan 22 10:53:18 crc kubenswrapper[4975]: I0122 10:53:18.662393 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" podStartSLOduration=4.121024938 podStartE2EDuration="38.662312916s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.109705414 +0000 UTC m=+955.767741524" lastFinishedPulling="2026-01-22 10:53:17.650993382 +0000 UTC m=+990.309029502" observedRunningTime="2026-01-22 10:53:18.660278738 +0000 UTC m=+991.318314848" watchObservedRunningTime="2026-01-22 10:53:18.662312916 +0000 UTC m=+991.320349046" Jan 22 10:53:20 crc kubenswrapper[4975]: I0122 10:53:20.497557 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" event={"ID":"17061bf9-682c-4294-950a-18a5c5a00577","Type":"ContainerStarted","Data":"1c0d4111f73568dada7522fc21b5a2f51660cf7e15282e7c6013df0007ffa97d"} Jan 22 10:53:20 crc kubenswrapper[4975]: I0122 10:53:20.498317 
4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" Jan 22 10:53:20 crc kubenswrapper[4975]: I0122 10:53:20.513437 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" podStartSLOduration=3.040137732 podStartE2EDuration="40.513401663s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:42.195087473 +0000 UTC m=+954.853123573" lastFinishedPulling="2026-01-22 10:53:19.668351384 +0000 UTC m=+992.326387504" observedRunningTime="2026-01-22 10:53:20.509465665 +0000 UTC m=+993.167501775" watchObservedRunningTime="2026-01-22 10:53:20.513401663 +0000 UTC m=+993.171437773" Jan 22 10:53:21 crc kubenswrapper[4975]: I0122 10:53:21.426820 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dm74c" Jan 22 10:53:21 crc kubenswrapper[4975]: I0122 10:53:21.577580 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kfvm6" Jan 22 10:53:21 crc kubenswrapper[4975]: I0122 10:53:21.630733 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-qcx4k" Jan 22 10:53:21 crc kubenswrapper[4975]: I0122 10:53:21.698000 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-27zdj" Jan 22 10:53:21 crc kubenswrapper[4975]: I0122 10:53:21.751720 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-s64bc" Jan 22 10:53:21 crc kubenswrapper[4975]: I0122 10:53:21.798117 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-pspwb" Jan 22 10:53:21 crc kubenswrapper[4975]: I0122 10:53:21.884683 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-l8cz4" Jan 22 10:53:21 crc kubenswrapper[4975]: I0122 10:53:21.917473 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-fpxsm" Jan 22 10:53:21 crc kubenswrapper[4975]: I0122 10:53:21.946467 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ddrzg" Jan 22 10:53:27 crc kubenswrapper[4975]: I0122 10:53:27.278037 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-s5qw6" Jan 22 10:53:27 crc kubenswrapper[4975]: I0122 10:53:27.616319 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7" Jan 22 10:53:28 crc kubenswrapper[4975]: I0122 10:53:28.163056 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:53:28 crc kubenswrapper[4975]: I0122 10:53:28.163116 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:53:28 crc kubenswrapper[4975]: I0122 10:53:28.240827 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:53:28 crc kubenswrapper[4975]: I0122 10:53:28.624350 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:53:28 crc kubenswrapper[4975]: I0122 10:53:28.673634 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzlh5"] Jan 22 10:53:30 crc kubenswrapper[4975]: I0122 10:53:30.574679 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hzlh5" podUID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerName="registry-server" containerID="cri-o://c58bc6aef2c5cf20c7a9825a1aaf5b435fcaf753e023b263c94f0de91c31c666" gracePeriod=2 Jan 22 10:53:31 crc kubenswrapper[4975]: I0122 10:53:31.176067 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wsw9j" Jan 22 10:53:31 crc kubenswrapper[4975]: I0122 10:53:31.287212 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6dln8" Jan 22 10:53:31 crc kubenswrapper[4975]: I0122 10:53:31.560226 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hgw2h" Jan 22 10:53:31 crc kubenswrapper[4975]: I0122 10:53:31.599625 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-48bd6" Jan 22 10:53:31 crc kubenswrapper[4975]: I0122 10:53:31.646483 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g49ht" Jan 22 10:53:31 crc kubenswrapper[4975]: I0122 10:53:31.818919 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-wmzml" Jan 22 10:53:31 crc kubenswrapper[4975]: I0122 10:53:31.851053 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x2kk6" Jan 22 10:53:32 crc kubenswrapper[4975]: I0122 10:53:32.589138 4975 generic.go:334] "Generic (PLEG): container finished" podID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerID="c58bc6aef2c5cf20c7a9825a1aaf5b435fcaf753e023b263c94f0de91c31c666" exitCode=0 Jan 22 10:53:32 crc kubenswrapper[4975]: I0122 10:53:32.589186 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzlh5" event={"ID":"7daf3043-6acd-4038-b576-f626c6ddcb52","Type":"ContainerDied","Data":"c58bc6aef2c5cf20c7a9825a1aaf5b435fcaf753e023b263c94f0de91c31c666"} Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.134480 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.323819 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4pl4\" (UniqueName: \"kubernetes.io/projected/7daf3043-6acd-4038-b576-f626c6ddcb52-kube-api-access-b4pl4\") pod \"7daf3043-6acd-4038-b576-f626c6ddcb52\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.324153 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-catalog-content\") pod \"7daf3043-6acd-4038-b576-f626c6ddcb52\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.324201 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-utilities\") pod \"7daf3043-6acd-4038-b576-f626c6ddcb52\" (UID: \"7daf3043-6acd-4038-b576-f626c6ddcb52\") " Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.325213 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-utilities" (OuterVolumeSpecName: "utilities") pod "7daf3043-6acd-4038-b576-f626c6ddcb52" (UID: "7daf3043-6acd-4038-b576-f626c6ddcb52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.332484 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7daf3043-6acd-4038-b576-f626c6ddcb52-kube-api-access-b4pl4" (OuterVolumeSpecName: "kube-api-access-b4pl4") pod "7daf3043-6acd-4038-b576-f626c6ddcb52" (UID: "7daf3043-6acd-4038-b576-f626c6ddcb52"). InnerVolumeSpecName "kube-api-access-b4pl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.355657 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7daf3043-6acd-4038-b576-f626c6ddcb52" (UID: "7daf3043-6acd-4038-b576-f626c6ddcb52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.425548 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4pl4\" (UniqueName: \"kubernetes.io/projected/7daf3043-6acd-4038-b576-f626c6ddcb52-kube-api-access-b4pl4\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.425586 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.425596 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7daf3043-6acd-4038-b576-f626c6ddcb52-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.628604 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" event={"ID":"f1fc9c22-ce13-4bde-8901-08225613757a","Type":"ContainerStarted","Data":"d58f05becd6ac7ecfea29a64a1c88d1c4413278e5d2081e1651930c37c74e089"} Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.629016 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.630266 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" event={"ID":"f1fa565f-67df-4a8f-9f10-33397d530bbb","Type":"ContainerStarted","Data":"523233c6dc0c9ef4c07771652ba030c55a8be7e1850cb5f4fd941bb7f2ac878c"} Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.630421 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.632417 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzlh5" event={"ID":"7daf3043-6acd-4038-b576-f626c6ddcb52","Type":"ContainerDied","Data":"4ba7a621207deafe5cd916f86f7b44fa5d8c44ca01852c94cb895a70c5f89521"} Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.632466 4975 scope.go:117] "RemoveContainer" containerID="c58bc6aef2c5cf20c7a9825a1aaf5b435fcaf753e023b263c94f0de91c31c666" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.632573 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzlh5" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.634386 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" event={"ID":"3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb","Type":"ContainerStarted","Data":"183a2e89361ea07da60f3a383cdccc1c25a63884d66dee20f9a2c3f209790dad"} Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.635015 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.648923 4975 scope.go:117] "RemoveContainer" containerID="b4eef09717433a663c64f54d8556d64cec86459326c62328405ef21c9e08dc72" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.664258 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" podStartSLOduration=3.525368889 podStartE2EDuration="57.664239334s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:42.727810315 +0000 UTC m=+955.385846425" lastFinishedPulling="2026-01-22 10:53:36.86668076 +0000 UTC m=+1009.524716870" observedRunningTime="2026-01-22 10:53:37.646584687 +0000 UTC m=+1010.304620797" watchObservedRunningTime="2026-01-22 10:53:37.664239334 +0000 UTC m=+1010.322275444" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.665747 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzlh5"] Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.672450 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzlh5"] Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.681781 4975 scope.go:117] "RemoveContainer" containerID="29d9f6f9de3092ee540a7749df276a8084433d0ee8ceb6cde5035eee7dbe7b97" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.693181 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" podStartSLOduration=3.925275373 podStartE2EDuration="57.693164032s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:43.104396036 +0000 UTC m=+955.762432146" lastFinishedPulling="2026-01-22 10:53:36.872284695 +0000 UTC m=+1009.530320805" observedRunningTime="2026-01-22 10:53:37.688938806 +0000 UTC m=+1010.346974916" watchObservedRunningTime="2026-01-22 10:53:37.693164032 +0000 UTC m=+1010.351200142" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.709139 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" podStartSLOduration=3.34664429 podStartE2EDuration="57.70912576s" podCreationTimestamp="2026-01-22 10:52:40 +0000 UTC" firstStartedPulling="2026-01-22 10:52:42.507204997 +0000 UTC m=+955.165241107" lastFinishedPulling="2026-01-22 10:53:36.869686467 +0000 UTC m=+1009.527722577" observedRunningTime="2026-01-22 10:53:37.704533474 +0000 UTC m=+1010.362569584" watchObservedRunningTime="2026-01-22 10:53:37.70912576 +0000 UTC m=+1010.367161870" Jan 22 10:53:37 crc kubenswrapper[4975]: I0122 10:53:37.762279 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7daf3043-6acd-4038-b576-f626c6ddcb52" 
path="/var/lib/kubelet/pods/7daf3043-6acd-4038-b576-f626c6ddcb52/volumes" Jan 22 10:53:51 crc kubenswrapper[4975]: I0122 10:53:51.213280 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w2lh" Jan 22 10:53:51 crc kubenswrapper[4975]: I0122 10:53:51.493182 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jk546" Jan 22 10:53:51 crc kubenswrapper[4975]: I0122 10:53:51.707802 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4jt9j" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.515528 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s9bqp"] Jan 22 10:54:08 crc kubenswrapper[4975]: E0122 10:54:08.518071 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerName="extract-content" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.518098 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerName="extract-content" Jan 22 10:54:08 crc kubenswrapper[4975]: E0122 10:54:08.518113 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerName="registry-server" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.518125 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerName="registry-server" Jan 22 10:54:08 crc kubenswrapper[4975]: E0122 10:54:08.518147 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="registry-server" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.518160 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="registry-server" Jan 22 10:54:08 crc kubenswrapper[4975]: E0122 10:54:08.518184 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="extract-utilities" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.518196 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="extract-utilities" Jan 22 10:54:08 crc kubenswrapper[4975]: E0122 10:54:08.518219 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="extract-content" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.518230 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="extract-content" Jan 22 10:54:08 crc kubenswrapper[4975]: E0122 10:54:08.518250 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerName="extract-utilities" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.518262 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerName="extract-utilities" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.518565 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="7daf3043-6acd-4038-b576-f626c6ddcb52" containerName="registry-server" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.518591 4975 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a490f21d-ca26-48ea-88c7-c3b8ff21995e" containerName="registry-server" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.519711 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.525003 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.525230 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.525232 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s9bqp"] Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.525430 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.525656 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-k6kzp" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.574051 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gjvh"] Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.583071 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.586147 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.597677 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gjvh"] Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.624900 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96431602-4d51-4687-83da-151eb56c4eb3-config\") pod \"dnsmasq-dns-675f4bcbfc-s9bqp\" (UID: \"96431602-4d51-4687-83da-151eb56c4eb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.624980 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfsgd\" (UniqueName: \"kubernetes.io/projected/96431602-4d51-4687-83da-151eb56c4eb3-kube-api-access-bfsgd\") pod \"dnsmasq-dns-675f4bcbfc-s9bqp\" (UID: \"96431602-4d51-4687-83da-151eb56c4eb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.726252 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfsgd\" (UniqueName: \"kubernetes.io/projected/96431602-4d51-4687-83da-151eb56c4eb3-kube-api-access-bfsgd\") pod \"dnsmasq-dns-675f4bcbfc-s9bqp\" (UID: \"96431602-4d51-4687-83da-151eb56c4eb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.726316 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhdh\" (UniqueName: \"kubernetes.io/projected/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-kube-api-access-bhhdh\") pod \"dnsmasq-dns-78dd6ddcc-9gjvh\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.726387 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96431602-4d51-4687-83da-151eb56c4eb3-config\") pod \"dnsmasq-dns-675f4bcbfc-s9bqp\" (UID: \"96431602-4d51-4687-83da-151eb56c4eb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.726417 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9gjvh\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.726476 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-config\") pod \"dnsmasq-dns-78dd6ddcc-9gjvh\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.727302 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96431602-4d51-4687-83da-151eb56c4eb3-config\") pod \"dnsmasq-dns-675f4bcbfc-s9bqp\" (UID: \"96431602-4d51-4687-83da-151eb56c4eb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.748715 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfsgd\" (UniqueName: \"kubernetes.io/projected/96431602-4d51-4687-83da-151eb56c4eb3-kube-api-access-bfsgd\") pod \"dnsmasq-dns-675f4bcbfc-s9bqp\" (UID: \"96431602-4d51-4687-83da-151eb56c4eb3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.827345 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhhdh\" (UniqueName: \"kubernetes.io/projected/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-kube-api-access-bhhdh\") pod \"dnsmasq-dns-78dd6ddcc-9gjvh\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.827445 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9gjvh\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.827561 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-config\") pod \"dnsmasq-dns-78dd6ddcc-9gjvh\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.828294 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9gjvh\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.828435 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-config\") pod \"dnsmasq-dns-78dd6ddcc-9gjvh\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.845031 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhhdh\" (UniqueName: \"kubernetes.io/projected/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-kube-api-access-bhhdh\") pod \"dnsmasq-dns-78dd6ddcc-9gjvh\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.851023 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:08 crc kubenswrapper[4975]: I0122 10:54:08.900250 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:09 crc kubenswrapper[4975]: I0122 10:54:09.317820 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s9bqp"] Jan 22 10:54:09 crc kubenswrapper[4975]: W0122 10:54:09.325550 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96431602_4d51_4687_83da_151eb56c4eb3.slice/crio-3a4a75ca484e9c9cbfbe8690190be6ddc94955c71e8fd9f06806f93ee342ad21 WatchSource:0}: Error finding container 3a4a75ca484e9c9cbfbe8690190be6ddc94955c71e8fd9f06806f93ee342ad21: Status 404 returned error can't find the container with id 3a4a75ca484e9c9cbfbe8690190be6ddc94955c71e8fd9f06806f93ee342ad21 Jan 22 10:54:09 crc kubenswrapper[4975]: I0122 10:54:09.327701 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:54:09 crc kubenswrapper[4975]: I0122 10:54:09.378754 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gjvh"] Jan 22 10:54:09 crc kubenswrapper[4975]: W0122 10:54:09.382446 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdb1b7d4_b919_40e3_a9c3_a9f3d7ffb3f9.slice/crio-b6dfe8b2547fce927ca7bad57c2a1a193202326b3a4e3d0239cc8bcadfe6ca68 WatchSource:0}: Error finding container b6dfe8b2547fce927ca7bad57c2a1a193202326b3a4e3d0239cc8bcadfe6ca68: Status 404 returned error can't find the container with id b6dfe8b2547fce927ca7bad57c2a1a193202326b3a4e3d0239cc8bcadfe6ca68 Jan 22 10:54:09 crc kubenswrapper[4975]: I0122 10:54:09.901863 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" event={"ID":"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9","Type":"ContainerStarted","Data":"b6dfe8b2547fce927ca7bad57c2a1a193202326b3a4e3d0239cc8bcadfe6ca68"} Jan 22 10:54:09 crc kubenswrapper[4975]: I0122 10:54:09.903131 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" event={"ID":"96431602-4d51-4687-83da-151eb56c4eb3","Type":"ContainerStarted","Data":"3a4a75ca484e9c9cbfbe8690190be6ddc94955c71e8fd9f06806f93ee342ad21"} Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.212723 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s9bqp"] Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.242545 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qzlnj"] Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.243719 
4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.253934 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qzlnj"] Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.366150 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbbrs\" (UniqueName: \"kubernetes.io/projected/44c484ff-07fe-4a1b-a9de-ad08897f468f-kube-api-access-dbbrs\") pod \"dnsmasq-dns-666b6646f7-qzlnj\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.366223 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qzlnj\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.366244 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-config\") pod \"dnsmasq-dns-666b6646f7-qzlnj\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.467722 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbbrs\" (UniqueName: \"kubernetes.io/projected/44c484ff-07fe-4a1b-a9de-ad08897f468f-kube-api-access-dbbrs\") pod \"dnsmasq-dns-666b6646f7-qzlnj\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.467782 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qzlnj\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.467801 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-config\") pod \"dnsmasq-dns-666b6646f7-qzlnj\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.468710 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-config\") pod \"dnsmasq-dns-666b6646f7-qzlnj\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.468853 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qzlnj\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.491159 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gjvh"] Jan 22 10:54:11 
crc kubenswrapper[4975]: I0122 10:54:11.493247 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbbrs\" (UniqueName: \"kubernetes.io/projected/44c484ff-07fe-4a1b-a9de-ad08897f468f-kube-api-access-dbbrs\") pod \"dnsmasq-dns-666b6646f7-qzlnj\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.520344 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pb6s2"] Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.523699 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.551090 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pb6s2"] Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.573249 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.670082 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-config\") pod \"dnsmasq-dns-57d769cc4f-pb6s2\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") " pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.670126 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pb6s2\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") " pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.670151 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4dw2\" (UniqueName: \"kubernetes.io/projected/18481aeb-1cfa-4bd1-af3a-d4daea62b647-kube-api-access-r4dw2\") pod \"dnsmasq-dns-57d769cc4f-pb6s2\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") " pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.772158 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-config\") pod \"dnsmasq-dns-57d769cc4f-pb6s2\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") " pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.772217 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pb6s2\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") " pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.772255 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4dw2\" (UniqueName: \"kubernetes.io/projected/18481aeb-1cfa-4bd1-af3a-d4daea62b647-kube-api-access-r4dw2\") pod \"dnsmasq-dns-57d769cc4f-pb6s2\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") " pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.773096 4975 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-config\") pod \"dnsmasq-dns-57d769cc4f-pb6s2\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") " pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.773228 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pb6s2\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") " pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.801292 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4dw2\" (UniqueName: \"kubernetes.io/projected/18481aeb-1cfa-4bd1-af3a-d4daea62b647-kube-api-access-r4dw2\") pod \"dnsmasq-dns-57d769cc4f-pb6s2\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") " pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:11 crc kubenswrapper[4975]: I0122 10:54:11.851674 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.018898 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qzlnj"] Jan 22 10:54:12 crc kubenswrapper[4975]: W0122 10:54:12.021606 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44c484ff_07fe_4a1b_a9de_ad08897f468f.slice/crio-9ffc12c9c77950bdaa0a8ad0387537d03f15507cd09fae8a7db8e20fa7d78b6b WatchSource:0}: Error finding container 9ffc12c9c77950bdaa0a8ad0387537d03f15507cd09fae8a7db8e20fa7d78b6b: Status 404 returned error can't find the container with id 9ffc12c9c77950bdaa0a8ad0387537d03f15507cd09fae8a7db8e20fa7d78b6b Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.257092 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pb6s2"] Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.389306 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.390485 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.392762 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.392895 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.392933 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5nhpj" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.394615 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.394911 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.395510 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.396515 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.413102 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.587936 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5svp\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-kube-api-access-s5svp\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.588005 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be23c045-c637-4832-8944-ed6250b0d8b3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.588047 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.588074 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.588096 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.588135 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-config-data\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.588158 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.588184 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.591756 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be23c045-c637-4832-8944-ed6250b0d8b3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.591826 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.591917 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.679768 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.680889 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.684908 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692699 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be23c045-c637-4832-8944-ed6250b0d8b3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692748 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692770 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692783 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692815 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-config-data\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692831 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692845 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692875 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be23c045-c637-4832-8944-ed6250b0d8b3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692900 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 
10:54:12.692921 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.692942 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5svp\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-kube-api-access-s5svp\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.693440 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.693723 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-config-data\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.701812 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.702033 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.731460 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.731858 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be23c045-c637-4832-8944-ed6250b0d8b3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.732340 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.732543 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.733696 4975 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.734051 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.734252 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.734472 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.734700 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vjxnt" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.734874 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.734510 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.735560 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be23c045-c637-4832-8944-ed6250b0d8b3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.735680 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.740710 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.750930 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5svp\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-kube-api-access-s5svp\") pod \"rabbitmq-server-0\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " pod="openstack/rabbitmq-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801656 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801709 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc 
kubenswrapper[4975]: I0122 10:54:12.801737 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801779 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801812 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801843 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801862 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801885 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdvs\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-kube-api-access-4tdvs\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801900 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801927 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.801943 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc 
kubenswrapper[4975]: I0122 10:54:12.903177 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903243 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tdvs\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-kube-api-access-4tdvs\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903263 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903306 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903323 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903361 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903386 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903410 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903452 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903486 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.903501 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.904915 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.905075 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.905827 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.906528 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.906752 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.907210 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.907400 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.907449 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.923615 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.930317 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tdvs\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-kube-api-access-4tdvs\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.933025 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" event={"ID":"44c484ff-07fe-4a1b-a9de-ad08897f468f","Type":"ContainerStarted","Data":"9ffc12c9c77950bdaa0a8ad0387537d03f15507cd09fae8a7db8e20fa7d78b6b"} Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.936294 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:12 crc kubenswrapper[4975]: I0122 10:54:12.945241 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.026979 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.126837 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.889897 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.891038 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.894266 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-8xgsm" Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.895259 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.895955 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.896050 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.908489 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 22 10:54:13 crc kubenswrapper[4975]: I0122 10:54:13.923204 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.032583 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d809d7-faf1-4033-a986-f49f903d36f1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.032694 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8pl4\" (UniqueName: \"kubernetes.io/projected/f6d809d7-faf1-4033-a986-f49f903d36f1-kube-api-access-r8pl4\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.032816 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d809d7-faf1-4033-a986-f49f903d36f1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.032876 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f6d809d7-faf1-4033-a986-f49f903d36f1-kolla-config\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.032907 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d809d7-faf1-4033-a986-f49f903d36f1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.033065 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f6d809d7-faf1-4033-a986-f49f903d36f1-config-data-default\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.033140 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.033189 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f6d809d7-faf1-4033-a986-f49f903d36f1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.136096 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f6d809d7-faf1-4033-a986-f49f903d36f1-config-data-default\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.136505 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.136547 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f6d809d7-faf1-4033-a986-f49f903d36f1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.136616 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d809d7-faf1-4033-a986-f49f903d36f1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.136727 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8pl4\" (UniqueName: \"kubernetes.io/projected/f6d809d7-faf1-4033-a986-f49f903d36f1-kube-api-access-r8pl4\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.136767 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d809d7-faf1-4033-a986-f49f903d36f1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.136788 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f6d809d7-faf1-4033-a986-f49f903d36f1-kolla-config\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.136808 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d809d7-faf1-4033-a986-f49f903d36f1-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.137749 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.137908 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f6d809d7-faf1-4033-a986-f49f903d36f1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.139085 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f6d809d7-faf1-4033-a986-f49f903d36f1-kolla-config\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.140754 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f6d809d7-faf1-4033-a986-f49f903d36f1-config-data-default\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.142082 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d809d7-faf1-4033-a986-f49f903d36f1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.142850 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d809d7-faf1-4033-a986-f49f903d36f1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.155713 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d809d7-faf1-4033-a986-f49f903d36f1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.168154 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8pl4\" (UniqueName: \"kubernetes.io/projected/f6d809d7-faf1-4033-a986-f49f903d36f1-kube-api-access-r8pl4\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.175227 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"f6d809d7-faf1-4033-a986-f49f903d36f1\") " pod="openstack/openstack-galera-0" Jan 22 10:54:14 crc kubenswrapper[4975]: I0122 10:54:14.222820 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.239186 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.240953 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.244420 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.244471 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.244751 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jwrx4" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.246648 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.256259 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.356393 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/621048f0-ccdc-4953-97d4-ba5286ff3345-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.356717 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621048f0-ccdc-4953-97d4-ba5286ff3345-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.356776 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/621048f0-ccdc-4953-97d4-ba5286ff3345-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.356795 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/621048f0-ccdc-4953-97d4-ba5286ff3345-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.356813 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/621048f0-ccdc-4953-97d4-ba5286ff3345-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.356866 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.356886 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvdp\" (UniqueName: \"kubernetes.io/projected/621048f0-ccdc-4953-97d4-ba5286ff3345-kube-api-access-gpvdp\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.356916 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621048f0-ccdc-4953-97d4-ba5286ff3345-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.458480 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/621048f0-ccdc-4953-97d4-ba5286ff3345-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.458528 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/621048f0-ccdc-4953-97d4-ba5286ff3345-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.458550 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/621048f0-ccdc-4953-97d4-ba5286ff3345-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.458612 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.458632 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpvdp\" (UniqueName: \"kubernetes.io/projected/621048f0-ccdc-4953-97d4-ba5286ff3345-kube-api-access-gpvdp\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.458660 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621048f0-ccdc-4953-97d4-ba5286ff3345-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.458682 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/621048f0-ccdc-4953-97d4-ba5286ff3345-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.458705 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621048f0-ccdc-4953-97d4-ba5286ff3345-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.458932 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/621048f0-ccdc-4953-97d4-ba5286ff3345-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.459430 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.460119 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/621048f0-ccdc-4953-97d4-ba5286ff3345-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.460192 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/621048f0-ccdc-4953-97d4-ba5286ff3345-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.460414 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621048f0-ccdc-4953-97d4-ba5286ff3345-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.472879 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621048f0-ccdc-4953-97d4-ba5286ff3345-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.483621 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.485468 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpvdp\" (UniqueName: \"kubernetes.io/projected/621048f0-ccdc-4953-97d4-ba5286ff3345-kube-api-access-gpvdp\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 
10:54:15.485485 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/621048f0-ccdc-4953-97d4-ba5286ff3345-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"621048f0-ccdc-4953-97d4-ba5286ff3345\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.577526 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.578577 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.582665 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.582953 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6qhzk" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.583082 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.599457 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.600167 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.660672 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5525f79d-14ec-480a-b69f-7c003fdc6dba-kolla-config\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.660711 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5525f79d-14ec-480a-b69f-7c003fdc6dba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.660734 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vffgf\" (UniqueName: \"kubernetes.io/projected/5525f79d-14ec-480a-b69f-7c003fdc6dba-kube-api-access-vffgf\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.660767 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5525f79d-14ec-480a-b69f-7c003fdc6dba-config-data\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.660917 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5525f79d-14ec-480a-b69f-7c003fdc6dba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.763256 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/5525f79d-14ec-480a-b69f-7c003fdc6dba-kolla-config\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.763494 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5525f79d-14ec-480a-b69f-7c003fdc6dba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.763561 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vffgf\" (UniqueName: \"kubernetes.io/projected/5525f79d-14ec-480a-b69f-7c003fdc6dba-kube-api-access-vffgf\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.763626 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5525f79d-14ec-480a-b69f-7c003fdc6dba-config-data\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.763756 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5525f79d-14ec-480a-b69f-7c003fdc6dba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.764032 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5525f79d-14ec-480a-b69f-7c003fdc6dba-kolla-config\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.764907 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5525f79d-14ec-480a-b69f-7c003fdc6dba-config-data\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.768146 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5525f79d-14ec-480a-b69f-7c003fdc6dba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.768634 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5525f79d-14ec-480a-b69f-7c003fdc6dba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.782843 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vffgf\" (UniqueName: \"kubernetes.io/projected/5525f79d-14ec-480a-b69f-7c003fdc6dba-kube-api-access-vffgf\") pod \"memcached-0\" (UID: \"5525f79d-14ec-480a-b69f-7c003fdc6dba\") " pod="openstack/memcached-0" Jan 22 10:54:15 crc kubenswrapper[4975]: I0122 10:54:15.896075 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 22 10:54:17 crc kubenswrapper[4975]: I0122 10:54:17.808887 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:54:17 crc kubenswrapper[4975]: I0122 10:54:17.815309 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 10:54:17 crc kubenswrapper[4975]: I0122 10:54:17.823001 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-gf47c" Jan 22 10:54:17 crc kubenswrapper[4975]: I0122 10:54:17.843197 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:54:17 crc kubenswrapper[4975]: I0122 10:54:17.902115 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7b75\" (UniqueName: \"kubernetes.io/projected/fb31a1a0-e358-436c-b4e3-9bd699096030-kube-api-access-g7b75\") pod \"kube-state-metrics-0\" (UID: \"fb31a1a0-e358-436c-b4e3-9bd699096030\") " pod="openstack/kube-state-metrics-0" Jan 22 10:54:17 crc kubenswrapper[4975]: I0122 10:54:17.975177 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" event={"ID":"18481aeb-1cfa-4bd1-af3a-d4daea62b647","Type":"ContainerStarted","Data":"b76d0ddbc107002f44fcdf6f74bb6ecdd838bc93649367e42f3353a3a53e2e9e"} Jan 22 10:54:18 crc kubenswrapper[4975]: I0122 10:54:18.003287 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7b75\" (UniqueName: \"kubernetes.io/projected/fb31a1a0-e358-436c-b4e3-9bd699096030-kube-api-access-g7b75\") pod \"kube-state-metrics-0\" (UID: \"fb31a1a0-e358-436c-b4e3-9bd699096030\") " pod="openstack/kube-state-metrics-0" Jan 22 10:54:18 crc kubenswrapper[4975]: I0122 10:54:18.022305 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7b75\" (UniqueName: \"kubernetes.io/projected/fb31a1a0-e358-436c-b4e3-9bd699096030-kube-api-access-g7b75\") pod \"kube-state-metrics-0\" (UID: \"fb31a1a0-e358-436c-b4e3-9bd699096030\") " pod="openstack/kube-state-metrics-0" Jan 22 10:54:18 crc kubenswrapper[4975]: I0122 10:54:18.180678 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 10:54:18 crc kubenswrapper[4975]: I0122 10:54:18.721881 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.225905 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mj5f2"] Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.228119 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.232324 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.233755 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.234366 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-d4568" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.245584 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jqvh9"] Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.247188 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.259790 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mj5f2"] Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.259834 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jqvh9"] Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354223 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwzf\" (UniqueName: \"kubernetes.io/projected/47de946e-512c-49a6-b633-c3c1640fb4bb-kube-api-access-snwzf\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354306 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-var-run\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354351 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-scripts\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354371 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47de946e-512c-49a6-b633-c3c1640fb4bb-var-run-ovn\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354406 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-var-lib\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354422 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47de946e-512c-49a6-b633-c3c1640fb4bb-combined-ca-bundle\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " 
pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354474 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47de946e-512c-49a6-b633-c3c1640fb4bb-var-log-ovn\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354493 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/47de946e-512c-49a6-b633-c3c1640fb4bb-ovn-controller-tls-certs\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354521 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-etc-ovs\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354546 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-var-log\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354572 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47de946e-512c-49a6-b633-c3c1640fb4bb-var-run\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354675 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5fg9\" (UniqueName: \"kubernetes.io/projected/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-kube-api-access-s5fg9\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.354719 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47de946e-512c-49a6-b633-c3c1640fb4bb-scripts\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456385 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-var-run\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456434 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-scripts\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 
10:54:21.456456 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47de946e-512c-49a6-b633-c3c1640fb4bb-var-run-ovn\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456588 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-var-lib\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456613 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47de946e-512c-49a6-b633-c3c1640fb4bb-combined-ca-bundle\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456643 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47de946e-512c-49a6-b633-c3c1640fb4bb-var-log-ovn\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456671 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/47de946e-512c-49a6-b633-c3c1640fb4bb-ovn-controller-tls-certs\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456697 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-etc-ovs\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456717 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-var-log\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456746 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47de946e-512c-49a6-b633-c3c1640fb4bb-var-run\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456775 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5fg9\" (UniqueName: \"kubernetes.io/projected/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-kube-api-access-s5fg9\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456797 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47de946e-512c-49a6-b633-c3c1640fb4bb-scripts\") 
pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456832 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwzf\" (UniqueName: \"kubernetes.io/projected/47de946e-512c-49a6-b633-c3c1640fb4bb-kube-api-access-snwzf\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.456980 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-var-run\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.457016 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-etc-ovs\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.457025 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47de946e-512c-49a6-b633-c3c1640fb4bb-var-log-ovn\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.457058 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47de946e-512c-49a6-b633-c3c1640fb4bb-var-run\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.457112 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-var-log\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.457177 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-var-lib\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.457388 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47de946e-512c-49a6-b633-c3c1640fb4bb-var-run-ovn\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.458983 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47de946e-512c-49a6-b633-c3c1640fb4bb-scripts\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.459277 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-scripts\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.468580 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/47de946e-512c-49a6-b633-c3c1640fb4bb-ovn-controller-tls-certs\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.469062 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47de946e-512c-49a6-b633-c3c1640fb4bb-combined-ca-bundle\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.473528 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snwzf\" (UniqueName: \"kubernetes.io/projected/47de946e-512c-49a6-b633-c3c1640fb4bb-kube-api-access-snwzf\") pod \"ovn-controller-mj5f2\" (UID: \"47de946e-512c-49a6-b633-c3c1640fb4bb\") " pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.473994 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5fg9\" (UniqueName: \"kubernetes.io/projected/90f8cfc5-4b43-4d89-8c42-a39306a0f1df-kube-api-access-s5fg9\") pod \"ovn-controller-ovs-jqvh9\" (UID: \"90f8cfc5-4b43-4d89-8c42-a39306a0f1df\") " pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.544869 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:21 crc kubenswrapper[4975]: I0122 10:54:21.600667 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.003308 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9","Type":"ContainerStarted","Data":"9a52afeea93102041ca27f13b0cac63859d3d6844540c93ef34ac5212bf4d73d"} Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.118854 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.120021 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.122985 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.123206 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.123384 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-56kc8" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.123529 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.123666 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.135116 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.287113 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3ce5bdc2-44a6-4071-8df7-736a1992fee0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.287174 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce5bdc2-44a6-4071-8df7-736a1992fee0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.287200 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ce5bdc2-44a6-4071-8df7-736a1992fee0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.287234 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ce5bdc2-44a6-4071-8df7-736a1992fee0-config\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.287250 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.287283 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce5bdc2-44a6-4071-8df7-736a1992fee0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.287302 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3ce5bdc2-44a6-4071-8df7-736a1992fee0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.287346 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfg6\" (UniqueName: \"kubernetes.io/projected/3ce5bdc2-44a6-4071-8df7-736a1992fee0-kube-api-access-hpfg6\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.299713 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.388755 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3ce5bdc2-44a6-4071-8df7-736a1992fee0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.388819 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce5bdc2-44a6-4071-8df7-736a1992fee0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.388856 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ce5bdc2-44a6-4071-8df7-736a1992fee0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.388900 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ce5bdc2-44a6-4071-8df7-736a1992fee0-config\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.388923 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.388967 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce5bdc2-44a6-4071-8df7-736a1992fee0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.388997 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce5bdc2-44a6-4071-8df7-736a1992fee0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.389043 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpfg6\" (UniqueName: \"kubernetes.io/projected/3ce5bdc2-44a6-4071-8df7-736a1992fee0-kube-api-access-hpfg6\") pod 
\"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.389284 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.389353 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3ce5bdc2-44a6-4071-8df7-736a1992fee0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.390031 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ce5bdc2-44a6-4071-8df7-736a1992fee0-config\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.390097 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ce5bdc2-44a6-4071-8df7-736a1992fee0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.396636 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce5bdc2-44a6-4071-8df7-736a1992fee0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.397107 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce5bdc2-44a6-4071-8df7-736a1992fee0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.397531 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce5bdc2-44a6-4071-8df7-736a1992fee0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.404660 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpfg6\" (UniqueName: \"kubernetes.io/projected/3ce5bdc2-44a6-4071-8df7-736a1992fee0-kube-api-access-hpfg6\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.413665 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3ce5bdc2-44a6-4071-8df7-736a1992fee0\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:22 crc kubenswrapper[4975]: I0122 10:54:22.452145 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.336635 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.337903 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.340601 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.340987 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.341326 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-dqjpz" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.341360 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.352452 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.443606 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lc2k\" (UniqueName: \"kubernetes.io/projected/a50b6032-49e5-467e-9728-b4c43293dbcf-kube-api-access-7lc2k\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.443679 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.443707 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a50b6032-49e5-467e-9728-b4c43293dbcf-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.443736 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50b6032-49e5-467e-9728-b4c43293dbcf-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.443757 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50b6032-49e5-467e-9728-b4c43293dbcf-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.443829 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50b6032-49e5-467e-9728-b4c43293dbcf-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " 
pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.443890 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a50b6032-49e5-467e-9728-b4c43293dbcf-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.443927 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50b6032-49e5-467e-9728-b4c43293dbcf-config\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.545500 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a50b6032-49e5-467e-9728-b4c43293dbcf-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.545560 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50b6032-49e5-467e-9728-b4c43293dbcf-config\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.545643 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lc2k\" (UniqueName: \"kubernetes.io/projected/a50b6032-49e5-467e-9728-b4c43293dbcf-kube-api-access-7lc2k\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.545675 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.545693 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a50b6032-49e5-467e-9728-b4c43293dbcf-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.545715 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50b6032-49e5-467e-9728-b4c43293dbcf-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.545728 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50b6032-49e5-467e-9728-b4c43293dbcf-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.545743 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a50b6032-49e5-467e-9728-b4c43293dbcf-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.546106 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.546847 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50b6032-49e5-467e-9728-b4c43293dbcf-config\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.547037 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a50b6032-49e5-467e-9728-b4c43293dbcf-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.547496 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a50b6032-49e5-467e-9728-b4c43293dbcf-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.550925 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50b6032-49e5-467e-9728-b4c43293dbcf-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.555928 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50b6032-49e5-467e-9728-b4c43293dbcf-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.558306 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50b6032-49e5-467e-9728-b4c43293dbcf-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.566890 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lc2k\" (UniqueName: \"kubernetes.io/projected/a50b6032-49e5-467e-9728-b4c43293dbcf-kube-api-access-7lc2k\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.578459 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a50b6032-49e5-467e-9728-b4c43293dbcf\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:25 crc kubenswrapper[4975]: I0122 10:54:25.671265 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:27 crc kubenswrapper[4975]: E0122 10:54:27.241550 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 22 10:54:27 crc kubenswrapper[4975]: E0122 10:54:27.242142 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfsgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-s9bqp_openstack(96431602-4d51-4687-83da-151eb56c4eb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:54:27 crc kubenswrapper[4975]: E0122 10:54:27.243531 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" podUID="96431602-4d51-4687-83da-151eb56c4eb3" Jan 22 10:54:27 crc kubenswrapper[4975]: E0122 10:54:27.358791 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 22 10:54:27 crc kubenswrapper[4975]: E0122 10:54:27.359252 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhhdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9gjvh_openstack(cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:54:27 crc kubenswrapper[4975]: E0122 10:54:27.360457 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" podUID="cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9" Jan 22 10:54:27 crc kubenswrapper[4975]: I0122 10:54:27.676887 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 10:54:27 crc kubenswrapper[4975]: W0122 10:54:27.733917 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621048f0_ccdc_4953_97d4_ba5286ff3345.slice/crio-452cd9b352bf1e58423cbe2d90b5a7635ee54f80ca1227cae6338bcb771dd76c WatchSource:0}: Error finding container 452cd9b352bf1e58423cbe2d90b5a7635ee54f80ca1227cae6338bcb771dd76c: Status 404 returned error can't find the container with id 452cd9b352bf1e58423cbe2d90b5a7635ee54f80ca1227cae6338bcb771dd76c Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.053926 4975 generic.go:334] "Generic (PLEG): container finished" podID="44c484ff-07fe-4a1b-a9de-ad08897f468f" containerID="c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf" exitCode=0 Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.053997 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" event={"ID":"44c484ff-07fe-4a1b-a9de-ad08897f468f","Type":"ContainerDied","Data":"c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf"} Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.059256 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621048f0-ccdc-4953-97d4-ba5286ff3345","Type":"ContainerStarted","Data":"452cd9b352bf1e58423cbe2d90b5a7635ee54f80ca1227cae6338bcb771dd76c"} Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.063958 4975 generic.go:334] "Generic (PLEG): container finished" podID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" containerID="b3bcf9b93932253af09efd34c2a2ee850039b12607c02033f8d9e1052582282d" exitCode=0 Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.064068 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" event={"ID":"18481aeb-1cfa-4bd1-af3a-d4daea62b647","Type":"ContainerDied","Data":"b3bcf9b93932253af09efd34c2a2ee850039b12607c02033f8d9e1052582282d"} Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.086537 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f6d809d7-faf1-4033-a986-f49f903d36f1","Type":"ContainerStarted","Data":"d83e73cc9b191ba8faeadd741de5168b319bfc22f019ba000ca92524125da818"} Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.114829 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:54:28 crc kubenswrapper[4975]: W0122 10:54:28.129649 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe23c045_c637_4832_8944_ed6250b0d8b3.slice/crio-e605e585eae21adbd376d5b1b58533c62b7416ab61ada02bbff1f6197654bd3c WatchSource:0}: Error finding container e605e585eae21adbd376d5b1b58533c62b7416ab61ada02bbff1f6197654bd3c: Status 404 returned error can't find the container with id e605e585eae21adbd376d5b1b58533c62b7416ab61ada02bbff1f6197654bd3c Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.149376 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.159590 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.254506 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.272150 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mj5f2"] Jan 22 10:54:28 crc kubenswrapper[4975]: I0122 10:54:28.946889 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.092943 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"be23c045-c637-4832-8944-ed6250b0d8b3","Type":"ContainerStarted","Data":"e605e585eae21adbd376d5b1b58533c62b7416ab61ada02bbff1f6197654bd3c"} Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.094270 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fb31a1a0-e358-436c-b4e3-9bd699096030","Type":"ContainerStarted","Data":"f47c08f36970e6857ecd5b1b2f749be259415890c1cdb75a7ac8117163c56fdd"} Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.096136 4975 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" event={"ID":"18481aeb-1cfa-4bd1-af3a-d4daea62b647","Type":"ContainerStarted","Data":"9b138b79880e6f8c2756a1d3f1a97010b96d3cc9ec5fcdef20a808ae5f71f4cd"} Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.096215 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.097517 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5525f79d-14ec-480a-b69f-7c003fdc6dba","Type":"ContainerStarted","Data":"7aae88c8b24be4ef66d92323470c3f287865679d00bfbbabc0418868b7bc145a"} Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.099164 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mj5f2" event={"ID":"47de946e-512c-49a6-b633-c3c1640fb4bb","Type":"ContainerStarted","Data":"60868f9cdb135de498a4a0f59c328a0dfc582b5d71268296c873dd11697ff544"} Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.101660 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" event={"ID":"44c484ff-07fe-4a1b-a9de-ad08897f468f","Type":"ContainerStarted","Data":"39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6"} Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.102053 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.102861 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3ce5bdc2-44a6-4071-8df7-736a1992fee0","Type":"ContainerStarted","Data":"a04986703d26ee9a9b6a6f8ae72a0bb88b095d227725dff048fe5cb292ee0317"} Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.120307 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" podStartSLOduration=8.472499645 podStartE2EDuration="18.120285782s" podCreationTimestamp="2026-01-22 10:54:11 +0000 UTC" firstStartedPulling="2026-01-22 10:54:17.754672844 +0000 UTC m=+1050.412708964" lastFinishedPulling="2026-01-22 10:54:27.402458991 +0000 UTC m=+1060.060495101" observedRunningTime="2026-01-22 10:54:29.114094483 +0000 UTC m=+1061.772130593" watchObservedRunningTime="2026-01-22 10:54:29.120285782 +0000 UTC m=+1061.778321892" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.134719 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" podStartSLOduration=2.736881145 podStartE2EDuration="18.134700406s" podCreationTimestamp="2026-01-22 10:54:11 +0000 UTC" firstStartedPulling="2026-01-22 10:54:12.024650526 +0000 UTC m=+1044.682686636" lastFinishedPulling="2026-01-22 10:54:27.422469797 +0000 UTC m=+1060.080505897" observedRunningTime="2026-01-22 10:54:29.130212484 +0000 UTC m=+1061.788248594" watchObservedRunningTime="2026-01-22 10:54:29.134700406 +0000 UTC m=+1061.792736516" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.323875 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jqvh9"] Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.556746 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.562951 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.649636 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-config\") pod \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.649701 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-dns-svc\") pod \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.649826 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96431602-4d51-4687-83da-151eb56c4eb3-config\") pod \"96431602-4d51-4687-83da-151eb56c4eb3\" (UID: \"96431602-4d51-4687-83da-151eb56c4eb3\") " Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.649906 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhhdh\" (UniqueName: \"kubernetes.io/projected/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-kube-api-access-bhhdh\") pod \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\" (UID: \"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9\") " Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.649945 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfsgd\" (UniqueName: \"kubernetes.io/projected/96431602-4d51-4687-83da-151eb56c4eb3-kube-api-access-bfsgd\") pod \"96431602-4d51-4687-83da-151eb56c4eb3\" (UID: \"96431602-4d51-4687-83da-151eb56c4eb3\") " Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.650129 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9" (UID: "cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.650183 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-config" (OuterVolumeSpecName: "config") pod "cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9" (UID: "cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.650440 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.650459 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.651254 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96431602-4d51-4687-83da-151eb56c4eb3-config" (OuterVolumeSpecName: "config") pod "96431602-4d51-4687-83da-151eb56c4eb3" (UID: "96431602-4d51-4687-83da-151eb56c4eb3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.655514 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-kube-api-access-bhhdh" (OuterVolumeSpecName: "kube-api-access-bhhdh") pod "cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9" (UID: "cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9"). InnerVolumeSpecName "kube-api-access-bhhdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.655565 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96431602-4d51-4687-83da-151eb56c4eb3-kube-api-access-bfsgd" (OuterVolumeSpecName: "kube-api-access-bfsgd") pod "96431602-4d51-4687-83da-151eb56c4eb3" (UID: "96431602-4d51-4687-83da-151eb56c4eb3"). InnerVolumeSpecName "kube-api-access-bfsgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.751947 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96431602-4d51-4687-83da-151eb56c4eb3-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.751979 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhhdh\" (UniqueName: \"kubernetes.io/projected/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9-kube-api-access-bhhdh\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:29 crc kubenswrapper[4975]: I0122 10:54:29.751991 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfsgd\" (UniqueName: \"kubernetes.io/projected/96431602-4d51-4687-83da-151eb56c4eb3-kube-api-access-bfsgd\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.111979 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jqvh9" event={"ID":"90f8cfc5-4b43-4d89-8c42-a39306a0f1df","Type":"ContainerStarted","Data":"6c00b801a1cc3b0cb22e377ffb596443e31580ecf86b1e8ca6ae793b4ee893dc"} Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.113030 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a50b6032-49e5-467e-9728-b4c43293dbcf","Type":"ContainerStarted","Data":"9d5863518982bc86dfd4cdcd13ad203a894e6f93380c65667801d8619b259544"} Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.113990 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" event={"ID":"cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9","Type":"ContainerDied","Data":"b6dfe8b2547fce927ca7bad57c2a1a193202326b3a4e3d0239cc8bcadfe6ca68"} Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.114036 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9gjvh" Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.115209 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" event={"ID":"96431602-4d51-4687-83da-151eb56c4eb3","Type":"ContainerDied","Data":"3a4a75ca484e9c9cbfbe8690190be6ddc94955c71e8fd9f06806f93ee342ad21"} Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.115227 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s9bqp" Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.154917 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s9bqp"] Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.159942 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s9bqp"] Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.180504 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gjvh"] Jan 22 10:54:30 crc kubenswrapper[4975]: I0122 10:54:30.185668 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gjvh"] Jan 22 10:54:31 crc kubenswrapper[4975]: I0122 10:54:31.765961 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96431602-4d51-4687-83da-151eb56c4eb3" path="/var/lib/kubelet/pods/96431602-4d51-4687-83da-151eb56c4eb3/volumes" Jan 22 10:54:31 crc kubenswrapper[4975]: I0122 10:54:31.766309 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9" path="/var/lib/kubelet/pods/cdb1b7d4-b919-40e3-a9c3-a9f3d7ffb3f9/volumes" Jan 22 10:54:33 crc kubenswrapper[4975]: I0122 10:54:33.134830 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f6d809d7-faf1-4033-a986-f49f903d36f1","Type":"ContainerStarted","Data":"b1e56b2bbd622293def13c8fb95743a116c371eb206520ef8ef42da886e16200"} Jan 22 10:54:34 crc kubenswrapper[4975]: I0122 10:54:34.143505 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9","Type":"ContainerStarted","Data":"5602f3dd0b6da7296455d135a004b394e4a988770a31398c9a509358b5a9efde"} Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.162787 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"be23c045-c637-4832-8944-ed6250b0d8b3","Type":"ContainerStarted","Data":"b572079d5df8d0dd6fc8e50919ce231fd9afd70997e377e23ee67f64c9525350"} Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.169076 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621048f0-ccdc-4953-97d4-ba5286ff3345","Type":"ContainerStarted","Data":"41562d16f008561f2ed7a88674c6fa4a2d575b226229459f0ac1e97dcc644095"} Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.170893 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5525f79d-14ec-480a-b69f-7c003fdc6dba","Type":"ContainerStarted","Data":"34150c3e4c8749cacb21ddf8435047f2ebb1348916f1b662670c1eca8aab5d0d"} Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.171074 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.173050 4975 generic.go:334] "Generic (PLEG): container finished" podID="f6d809d7-faf1-4033-a986-f49f903d36f1" containerID="b1e56b2bbd622293def13c8fb95743a116c371eb206520ef8ef42da886e16200" exitCode=0 Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.174422 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f6d809d7-faf1-4033-a986-f49f903d36f1","Type":"ContainerDied","Data":"b1e56b2bbd622293def13c8fb95743a116c371eb206520ef8ef42da886e16200"} Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.259221 4975 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.251771419 podStartE2EDuration="21.259198656s" podCreationTimestamp="2026-01-22 10:54:15 +0000 UTC" firstStartedPulling="2026-01-22 10:54:28.135492189 +0000 UTC m=+1060.793528299" lastFinishedPulling="2026-01-22 10:54:32.142919386 +0000 UTC m=+1064.800955536" observedRunningTime="2026-01-22 10:54:36.251084504 +0000 UTC m=+1068.909120624" watchObservedRunningTime="2026-01-22 10:54:36.259198656 +0000 UTC m=+1068.917234776" Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.575502 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.854531 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" Jan 22 10:54:36 crc kubenswrapper[4975]: I0122 10:54:36.908763 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qzlnj"] Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.182483 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3ce5bdc2-44a6-4071-8df7-736a1992fee0","Type":"ContainerStarted","Data":"d730c4dd4589a2f525103d9f045314fd03274a65a6dc99d97fa6219451be86d4"} Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.184050 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fb31a1a0-e358-436c-b4e3-9bd699096030","Type":"ContainerStarted","Data":"5dbda683ca515651b23b2d384cfe18ce78217bb970e3ccd5cfcf3df79057f044"} Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.184115 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.185611 4975 generic.go:334] "Generic (PLEG): container finished" podID="90f8cfc5-4b43-4d89-8c42-a39306a0f1df" containerID="2ab5f722b1522a9436a52edfaa1cc4c4f5954761ca946631daf524a8cfc6fd97" exitCode=0 Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.185684 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jqvh9" event={"ID":"90f8cfc5-4b43-4d89-8c42-a39306a0f1df","Type":"ContainerDied","Data":"2ab5f722b1522a9436a52edfaa1cc4c4f5954761ca946631daf524a8cfc6fd97"} Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.186742 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a50b6032-49e5-467e-9728-b4c43293dbcf","Type":"ContainerStarted","Data":"76b5bb7c4d1e4ffa908cad2c4b02c1e55e23573e5fb53399def6e3cd3945acdc"} Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.189426 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f6d809d7-faf1-4033-a986-f49f903d36f1","Type":"ContainerStarted","Data":"96bd2dfa659f4225ace405aab35809a6d63cba4b3ca2fca19079ec5c29f94cce"} Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.190885 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mj5f2" event={"ID":"47de946e-512c-49a6-b633-c3c1640fb4bb","Type":"ContainerStarted","Data":"3416ed7b30119f64b67d0b7f6eb127d102db55c444898c0888f291973394ff11"} Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.191921 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" 
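The memcached-0 startup-latency entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the image-pull window, with the pull window measured on the monotonic clock (the m= offsets). A quick check with the logged values, assuming exactly that relationship:

```go
package main

import "fmt"

func main() {
	// Values from the memcached-0 entry, in seconds.
	e2e := 21.259198656 // 10:54:36.259198656 minus creation at 10:54:15
	// Image-pull window from the monotonic m= offsets.
	firstStartedPulling := 1060.793528299
	lastFinishedPulling := 1064.800955536

	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("podStartSLOduration = %.9fs\n", slo) // 17.251771419s, matching the log
}
```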
podUID="44c484ff-07fe-4a1b-a9de-ad08897f468f" containerName="dnsmasq-dns" containerID="cri-o://39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6" gracePeriod=10 Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.203308 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.14761016 podStartE2EDuration="20.203262167s" podCreationTimestamp="2026-01-22 10:54:17 +0000 UTC" firstStartedPulling="2026-01-22 10:54:28.158069076 +0000 UTC m=+1060.816105186" lastFinishedPulling="2026-01-22 10:54:36.213721083 +0000 UTC m=+1068.871757193" observedRunningTime="2026-01-22 10:54:37.195997198 +0000 UTC m=+1069.854033308" watchObservedRunningTime="2026-01-22 10:54:37.203262167 +0000 UTC m=+1069.861298277" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.223139 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mj5f2" podStartSLOduration=8.398427827999999 podStartE2EDuration="16.223124699s" podCreationTimestamp="2026-01-22 10:54:21 +0000 UTC" firstStartedPulling="2026-01-22 10:54:28.273929819 +0000 UTC m=+1060.931965929" lastFinishedPulling="2026-01-22 10:54:36.09862669 +0000 UTC m=+1068.756662800" observedRunningTime="2026-01-22 10:54:37.216290622 +0000 UTC m=+1069.874326732" watchObservedRunningTime="2026-01-22 10:54:37.223124699 +0000 UTC m=+1069.881160809" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.593754 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.615246 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.710360992 podStartE2EDuration="25.615229757s" podCreationTimestamp="2026-01-22 10:54:12 +0000 UTC" firstStartedPulling="2026-01-22 10:54:27.234503434 +0000 UTC m=+1059.892539544" lastFinishedPulling="2026-01-22 10:54:32.139372199 +0000 UTC m=+1064.797408309" observedRunningTime="2026-01-22 10:54:37.262874694 +0000 UTC m=+1069.920910804" watchObservedRunningTime="2026-01-22 10:54:37.615229757 +0000 UTC m=+1070.273265867" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.783175 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-dns-svc\") pod \"44c484ff-07fe-4a1b-a9de-ad08897f468f\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.783262 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbbrs\" (UniqueName: \"kubernetes.io/projected/44c484ff-07fe-4a1b-a9de-ad08897f468f-kube-api-access-dbbrs\") pod \"44c484ff-07fe-4a1b-a9de-ad08897f468f\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.783299 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-config\") pod \"44c484ff-07fe-4a1b-a9de-ad08897f468f\" (UID: \"44c484ff-07fe-4a1b-a9de-ad08897f468f\") " Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.796666 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c484ff-07fe-4a1b-a9de-ad08897f468f-kube-api-access-dbbrs" (OuterVolumeSpecName: "kube-api-access-dbbrs") pod 
"44c484ff-07fe-4a1b-a9de-ad08897f468f" (UID: "44c484ff-07fe-4a1b-a9de-ad08897f468f"). InnerVolumeSpecName "kube-api-access-dbbrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.819558 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44c484ff-07fe-4a1b-a9de-ad08897f468f" (UID: "44c484ff-07fe-4a1b-a9de-ad08897f468f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.821487 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-config" (OuterVolumeSpecName: "config") pod "44c484ff-07fe-4a1b-a9de-ad08897f468f" (UID: "44c484ff-07fe-4a1b-a9de-ad08897f468f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.885803 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.885846 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbbrs\" (UniqueName: \"kubernetes.io/projected/44c484ff-07fe-4a1b-a9de-ad08897f468f-kube-api-access-dbbrs\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:37 crc kubenswrapper[4975]: I0122 10:54:37.886174 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c484ff-07fe-4a1b-a9de-ad08897f468f-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.207887 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jqvh9" event={"ID":"90f8cfc5-4b43-4d89-8c42-a39306a0f1df","Type":"ContainerStarted","Data":"9b38a92b93b93e9d0d09b9b6807b8ad0a64dacc8ee46c136bccb63663799d756"} Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.208276 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jqvh9" event={"ID":"90f8cfc5-4b43-4d89-8c42-a39306a0f1df","Type":"ContainerStarted","Data":"960e80031ccb95ac6e6b997c5ee65a5ed439b68d23b12b8b93ed8c12808ff2d9"} Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.209747 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.209785 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jqvh9" Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.214103 4975 generic.go:334] "Generic (PLEG): container finished" podID="44c484ff-07fe-4a1b-a9de-ad08897f468f" containerID="39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6" exitCode=0 Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.214158 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" event={"ID":"44c484ff-07fe-4a1b-a9de-ad08897f468f","Type":"ContainerDied","Data":"39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6"} Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.214191 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.214213 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qzlnj" event={"ID":"44c484ff-07fe-4a1b-a9de-ad08897f468f","Type":"ContainerDied","Data":"9ffc12c9c77950bdaa0a8ad0387537d03f15507cd09fae8a7db8e20fa7d78b6b"} Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.214248 4975 scope.go:117] "RemoveContainer" containerID="39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6" Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.215062 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mj5f2" Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.263780 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jqvh9" podStartSLOduration=10.674135905 podStartE2EDuration="17.263757767s" podCreationTimestamp="2026-01-22 10:54:21 +0000 UTC" firstStartedPulling="2026-01-22 10:54:29.508154415 +0000 UTC m=+1062.166190545" lastFinishedPulling="2026-01-22 10:54:36.097776297 +0000 UTC m=+1068.755812407" observedRunningTime="2026-01-22 10:54:38.246855605 +0000 UTC m=+1070.904891735" watchObservedRunningTime="2026-01-22 10:54:38.263757767 +0000 UTC m=+1070.921793887" Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.266629 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qzlnj"] Jan 22 10:54:38 crc kubenswrapper[4975]: I0122 10:54:38.279641 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qzlnj"] Jan 22 10:54:39 crc kubenswrapper[4975]: I0122 10:54:39.228003 4975 generic.go:334] "Generic (PLEG): container finished" podID="621048f0-ccdc-4953-97d4-ba5286ff3345" containerID="41562d16f008561f2ed7a88674c6fa4a2d575b226229459f0ac1e97dcc644095" exitCode=0 Jan 22 10:54:39 crc kubenswrapper[4975]: I0122 10:54:39.228107 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621048f0-ccdc-4953-97d4-ba5286ff3345","Type":"ContainerDied","Data":"41562d16f008561f2ed7a88674c6fa4a2d575b226229459f0ac1e97dcc644095"} Jan 22 10:54:39 crc kubenswrapper[4975]: I0122 10:54:39.340293 4975 scope.go:117] "RemoveContainer" containerID="c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf" Jan 22 10:54:39 crc kubenswrapper[4975]: I0122 10:54:39.777411 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44c484ff-07fe-4a1b-a9de-ad08897f468f" path="/var/lib/kubelet/pods/44c484ff-07fe-4a1b-a9de-ad08897f468f/volumes" Jan 22 10:54:39 crc kubenswrapper[4975]: I0122 10:54:39.913478 4975 scope.go:117] "RemoveContainer" containerID="39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6" Jan 22 10:54:39 crc kubenswrapper[4975]: E0122 10:54:39.914134 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6\": container with ID starting with 39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6 not found: ID does not exist" containerID="39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6" Jan 22 10:54:39 crc kubenswrapper[4975]: I0122 10:54:39.914207 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6"} 
err="failed to get container status \"39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6\": rpc error: code = NotFound desc = could not find container \"39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6\": container with ID starting with 39debceee602ba5c827b9e29deb6af9e8f2b9b24ae877e74b7e54dd46d28eda6 not found: ID does not exist" Jan 22 10:54:39 crc kubenswrapper[4975]: I0122 10:54:39.914253 4975 scope.go:117] "RemoveContainer" containerID="c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf" Jan 22 10:54:39 crc kubenswrapper[4975]: E0122 10:54:39.914947 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf\": container with ID starting with c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf not found: ID does not exist" containerID="c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf" Jan 22 10:54:39 crc kubenswrapper[4975]: I0122 10:54:39.915005 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf"} err="failed to get container status \"c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf\": rpc error: code = NotFound desc = could not find container \"c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf\": container with ID starting with c2a73ac83f67070c32d06a83ac27ecc8dd8323be5d4cad1e261434291cff00cf not found: ID does not exist" Jan 22 10:54:40 crc kubenswrapper[4975]: I0122 10:54:40.897988 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 22 10:54:41 crc kubenswrapper[4975]: I0122 10:54:41.248940 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621048f0-ccdc-4953-97d4-ba5286ff3345","Type":"ContainerStarted","Data":"9fba7a0085b6474056845a9d3856de3adbc6457b3f3b6db79d61840b2f865c1b"} Jan 22 10:54:41 crc kubenswrapper[4975]: I0122 10:54:41.279094 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.838688151 podStartE2EDuration="27.279068481s" podCreationTimestamp="2026-01-22 10:54:14 +0000 UTC" firstStartedPulling="2026-01-22 10:54:27.736283177 +0000 UTC m=+1060.394319287" lastFinishedPulling="2026-01-22 10:54:32.176663487 +0000 UTC m=+1064.834699617" observedRunningTime="2026-01-22 10:54:41.274675572 +0000 UTC m=+1073.932711702" watchObservedRunningTime="2026-01-22 10:54:41.279068481 +0000 UTC m=+1073.937104631" Jan 22 10:54:44 crc kubenswrapper[4975]: I0122 10:54:44.223760 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 22 10:54:44 crc kubenswrapper[4975]: I0122 10:54:44.228948 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 22 10:54:44 crc kubenswrapper[4975]: E0122 10:54:44.922259 4975 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.51:60874->38.102.83.51:37165: write tcp 38.102.83.51:60874->38.102.83.51:37165: write: broken pipe Jan 22 10:54:45 crc kubenswrapper[4975]: I0122 10:54:45.601386 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:45 crc kubenswrapper[4975]: I0122 10:54:45.601449 4975 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:47 crc kubenswrapper[4975]: I0122 10:54:47.203216 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 22 10:54:47 crc kubenswrapper[4975]: I0122 10:54:47.271662 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 22 10:54:47 crc kubenswrapper[4975]: I0122 10:54:47.294174 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a50b6032-49e5-467e-9728-b4c43293dbcf","Type":"ContainerStarted","Data":"90d6c512e145895ac219f05d22aba81f9baf9210066091e6caba176dfa41beff"} Jan 22 10:54:47 crc kubenswrapper[4975]: I0122 10:54:47.296147 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3ce5bdc2-44a6-4071-8df7-736a1992fee0","Type":"ContainerStarted","Data":"78cf742e21c64a14f5bfb89f93102eebf811e7cacf27d35b38e000487ae01d83"} Jan 22 10:54:47 crc kubenswrapper[4975]: I0122 10:54:47.341898 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.091853475 podStartE2EDuration="23.341855998s" podCreationTimestamp="2026-01-22 10:54:24 +0000 UTC" firstStartedPulling="2026-01-22 10:54:29.508088453 +0000 UTC m=+1062.166124563" lastFinishedPulling="2026-01-22 10:54:46.758090946 +0000 UTC m=+1079.416127086" observedRunningTime="2026-01-22 10:54:47.33971395 +0000 UTC m=+1079.997750060" watchObservedRunningTime="2026-01-22 10:54:47.341855998 +0000 UTC m=+1079.999892108" Jan 22 10:54:47 crc kubenswrapper[4975]: I0122 10:54:47.361432 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.534235951 podStartE2EDuration="26.361416373s" podCreationTimestamp="2026-01-22 10:54:21 +0000 UTC" firstStartedPulling="2026-01-22 10:54:28.265209601 +0000 UTC m=+1060.923245711" lastFinishedPulling="2026-01-22 10:54:42.092390023 +0000 UTC m=+1074.750426133" observedRunningTime="2026-01-22 10:54:47.357286969 +0000 UTC m=+1080.015323079" watchObservedRunningTime="2026-01-22 10:54:47.361416373 +0000 UTC m=+1080.019452483" Jan 22 10:54:47 crc kubenswrapper[4975]: I0122 10:54:47.453143 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.135543 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-ntnkr"] Jan 22 10:54:48 crc kubenswrapper[4975]: E0122 10:54:48.135834 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c484ff-07fe-4a1b-a9de-ad08897f468f" containerName="init" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.135847 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c484ff-07fe-4a1b-a9de-ad08897f468f" containerName="init" Jan 22 10:54:48 crc kubenswrapper[4975]: E0122 10:54:48.135872 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c484ff-07fe-4a1b-a9de-ad08897f468f" containerName="dnsmasq-dns" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.135878 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c484ff-07fe-4a1b-a9de-ad08897f468f" containerName="dnsmasq-dns" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.136033 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c484ff-07fe-4a1b-a9de-ad08897f468f" 
containerName="dnsmasq-dns" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.136723 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.151306 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-ntnkr"] Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.183823 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-ntnkr\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.183925 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphhd\" (UniqueName: \"kubernetes.io/projected/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-kube-api-access-xphhd\") pod \"dnsmasq-dns-7cb5889db5-ntnkr\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.183969 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-config\") pod \"dnsmasq-dns-7cb5889db5-ntnkr\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.185833 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.285465 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-config\") pod \"dnsmasq-dns-7cb5889db5-ntnkr\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.285889 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-ntnkr\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.285973 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xphhd\" (UniqueName: \"kubernetes.io/projected/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-kube-api-access-xphhd\") pod \"dnsmasq-dns-7cb5889db5-ntnkr\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.286850 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-config\") pod \"dnsmasq-dns-7cb5889db5-ntnkr\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.287494 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-ntnkr\" (UID: 
\"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.305417 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xphhd\" (UniqueName: \"kubernetes.io/projected/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-kube-api-access-xphhd\") pod \"dnsmasq-dns-7cb5889db5-ntnkr\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.449802 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.666661 4975 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-42cdr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.667125 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42cdr" podUID="befe9235-f8a1-4fda-ace8-2752310375e3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:54:48 crc kubenswrapper[4975]: I0122 10:54:48.927253 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-ntnkr"] Jan 22 10:54:48 crc kubenswrapper[4975]: W0122 10:54:48.927982 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e5cf2cd_d06c_40ee_b9b2_06b1342ae3c6.slice/crio-2bccb611d5a4df395a7bcc5354a7464de3dae22a3d2b6d099af750568f4ed1a2 WatchSource:0}: Error finding container 2bccb611d5a4df395a7bcc5354a7464de3dae22a3d2b6d099af750568f4ed1a2: Status 404 returned error can't find the container with id 2bccb611d5a4df395a7bcc5354a7464de3dae22a3d2b6d099af750568f4ed1a2 Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.300631 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.306884 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.309746 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.310476 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.310813 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-h4999" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.314867 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.326629 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" event={"ID":"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6","Type":"ContainerStarted","Data":"2bccb611d5a4df395a7bcc5354a7464de3dae22a3d2b6d099af750568f4ed1a2"} Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.329363 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.403694 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258efd08-8841-42a5-bfff-035926fa4a8a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.403875 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/258efd08-8841-42a5-bfff-035926fa4a8a-lock\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.403949 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.404118 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttzd\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-kube-api-access-9ttzd\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.404307 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.404382 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/258efd08-8841-42a5-bfff-035926fa4a8a-cache\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.452879 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.506145 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258efd08-8841-42a5-bfff-035926fa4a8a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.506502 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/258efd08-8841-42a5-bfff-035926fa4a8a-lock\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.506532 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.506601 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttzd\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-kube-api-access-9ttzd\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.506648 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.506672 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/258efd08-8841-42a5-bfff-035926fa4a8a-cache\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.507049 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: E0122 10:54:49.507295 4975 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 10:54:49 crc kubenswrapper[4975]: E0122 10:54:49.507434 4975 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 10:54:49 crc kubenswrapper[4975]: E0122 10:54:49.507547 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift podName:258efd08-8841-42a5-bfff-035926fa4a8a nodeName:}" failed. No retries permitted until 2026-01-22 10:54:50.007496499 +0000 UTC m=+1082.665532649 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift") pod "swift-storage-0" (UID: "258efd08-8841-42a5-bfff-035926fa4a8a") : configmap "swift-ring-files" not found Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.532975 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.582709 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/258efd08-8841-42a5-bfff-035926fa4a8a-cache\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.586029 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/258efd08-8841-42a5-bfff-035926fa4a8a-lock\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.593529 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258efd08-8841-42a5-bfff-035926fa4a8a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.594309 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttzd\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-kube-api-access-9ttzd\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.597729 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.671672 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.715410 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.807843 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-d82q2"] Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.809427 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.812451 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.812528 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.812772 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.836987 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-d82q2"] Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.852076 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7kqbs"] Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.854375 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.872130 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-d82q2"] Jan 22 10:54:49 crc kubenswrapper[4975]: E0122 10:54:49.872699 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-nn6b5 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-d82q2" podUID="3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.880804 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7kqbs"] Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.913672 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-swiftconf\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.913786 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-combined-ca-bundle\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.913872 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-etc-swift\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.913902 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6b5\" (UniqueName: \"kubernetes.io/projected/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-kube-api-access-nn6b5\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.914030 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-dispersionconf\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.914240 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-ring-data-devices\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:49 crc kubenswrapper[4975]: I0122 10:54:49.914320 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-scripts\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016014 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-combined-ca-bundle\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016077 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-etc-swift\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016107 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn6b5\" (UniqueName: \"kubernetes.io/projected/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-kube-api-access-nn6b5\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016135 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-dispersionconf\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016166 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ee721893-cf9b-4712-b421-19d6a59d7cf7-etc-swift\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016188 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-swiftconf\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016221 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-ring-data-devices\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016249 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-dispersionconf\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016280 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-scripts\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016302 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-ring-data-devices\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016348 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-combined-ca-bundle\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016370 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-scripts\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016390 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-swiftconf\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016415 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.016440 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zjfl\" (UniqueName: \"kubernetes.io/projected/ee721893-cf9b-4712-b421-19d6a59d7cf7-kube-api-access-2zjfl\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.017096 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-ring-data-devices\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.017552 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-scripts\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: E0122 10:54:50.017631 4975 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 10:54:50 crc kubenswrapper[4975]: E0122 10:54:50.017645 4975 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 10:54:50 crc kubenswrapper[4975]: E0122 10:54:50.017678 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift podName:258efd08-8841-42a5-bfff-035926fa4a8a nodeName:}" failed. No retries permitted until 2026-01-22 10:54:51.017667091 +0000 UTC m=+1083.675703201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift") pod "swift-storage-0" (UID: "258efd08-8841-42a5-bfff-035926fa4a8a") : configmap "swift-ring-files" not found Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.018523 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-etc-swift\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.020766 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-dispersionconf\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.024767 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-combined-ca-bundle\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.025951 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-swiftconf\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.034258 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn6b5\" (UniqueName: \"kubernetes.io/projected/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-kube-api-access-nn6b5\") pod \"swift-ring-rebalance-d82q2\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 
10:54:50.118708 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ee721893-cf9b-4712-b421-19d6a59d7cf7-etc-swift\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.118771 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-swiftconf\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.118821 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-ring-data-devices\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.118866 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-dispersionconf\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.118898 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-scripts\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.118959 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-combined-ca-bundle\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.119034 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zjfl\" (UniqueName: \"kubernetes.io/projected/ee721893-cf9b-4712-b421-19d6a59d7cf7-kube-api-access-2zjfl\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.119994 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ee721893-cf9b-4712-b421-19d6a59d7cf7-etc-swift\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.120467 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-scripts\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.121201 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-ring-data-devices\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.122359 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-dispersionconf\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.122624 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-combined-ca-bundle\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.124521 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-swiftconf\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.134888 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zjfl\" (UniqueName: \"kubernetes.io/projected/ee721893-cf9b-4712-b421-19d6a59d7cf7-kube-api-access-2zjfl\") pod \"swift-ring-rebalance-7kqbs\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") " pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.172711 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7kqbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.332814 4975 generic.go:334] "Generic (PLEG): container finished" podID="0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" containerID="546caae73a01a554cd926a19528dba5dbfd99f6ea51fc37f7a84aee5b04b804b" exitCode=0 Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.332970 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" event={"ID":"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6","Type":"ContainerDied","Data":"546caae73a01a554cd926a19528dba5dbfd99f6ea51fc37f7a84aee5b04b804b"} Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.333109 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.334218 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.357830 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.382524 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.432935 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.524060 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-ring-data-devices\") pod \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.524116 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-swiftconf\") pod \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.524193 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-scripts\") pod \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.524221 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-dispersionconf\") pod \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.524266 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-etc-swift\") pod \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.524290 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn6b5\" (UniqueName: \"kubernetes.io/projected/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-kube-api-access-nn6b5\") pod \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.524359 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-combined-ca-bundle\") pod \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\" (UID: \"3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f\") " Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.525003 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f" (UID: "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.526593 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f" (UID: "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.527200 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-scripts" (OuterVolumeSpecName: "scripts") pod "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f" (UID: "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.532048 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-kube-api-access-nn6b5" (OuterVolumeSpecName: "kube-api-access-nn6b5") pod "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f" (UID: "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f"). InnerVolumeSpecName "kube-api-access-nn6b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.532232 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f" (UID: "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.532422 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f" (UID: "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.535097 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f" (UID: "3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.548768 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-ntnkr"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.569513 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-84fpl"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.570670 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.581016 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.601978 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-84fpl"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.627121 4975 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.627151 4975 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.627160 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.627169 4975 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.627179 4975 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.627188 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn6b5\" (UniqueName: \"kubernetes.io/projected/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-kube-api-access-nn6b5\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.627196 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.651290 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7kqbs"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.711570 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-kkcjz"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.712554 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.714257 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.728098 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.728162 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-config\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.728214 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.728234 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grl4s\" (UniqueName: \"kubernetes.io/projected/74e6912c-e07e-4126-b273-90b6c0f8990d-kube-api-access-grl4s\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.729771 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kkcjz"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.756159 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-84fpl"] Jan 22 10:54:50 crc kubenswrapper[4975]: E0122 10:54:50.757471 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-grl4s ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" podUID="74e6912c-e07e-4126-b273-90b6c0f8990d" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.789281 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7dw86"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.790545 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.795412 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7dw86"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.797234 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829366 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b366080b-ad75-4205-9257-9d8a8ac5aed8-combined-ca-bundle\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829434 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-config\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829501 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829532 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grl4s\" (UniqueName: \"kubernetes.io/projected/74e6912c-e07e-4126-b273-90b6c0f8990d-kube-api-access-grl4s\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829549 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b366080b-ad75-4205-9257-9d8a8ac5aed8-ovn-rundir\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829585 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b366080b-ad75-4205-9257-9d8a8ac5aed8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829610 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b366080b-ad75-4205-9257-9d8a8ac5aed8-config\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829663 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dgcl\" (UniqueName: \"kubernetes.io/projected/b366080b-ad75-4205-9257-9d8a8ac5aed8-kube-api-access-5dgcl\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " 
pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829698 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b366080b-ad75-4205-9257-9d8a8ac5aed8-ovs-rundir\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.829716 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.830478 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.831119 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.832316 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-config\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.841945 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.843100 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.847108 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mwfvp" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.847268 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.847295 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.847443 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.849007 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grl4s\" (UniqueName: \"kubernetes.io/projected/74e6912c-e07e-4126-b273-90b6c0f8990d-kube-api-access-grl4s\") pod \"dnsmasq-dns-6c89d5d749-84fpl\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.853252 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.930628 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.930667 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b366080b-ad75-4205-9257-9d8a8ac5aed8-combined-ca-bundle\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.930689 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d61e05-db24-4d01-bfc6-5ff85c254d7c-config\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.930732 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-dns-svc\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.930747 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d61e05-db24-4d01-bfc6-5ff85c254d7c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.930859 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-config\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") 
" pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.930912 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6vfn\" (UniqueName: \"kubernetes.io/projected/c13e808a-df7b-4367-b5e3-17aab90f3699-kube-api-access-b6vfn\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931000 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d61e05-db24-4d01-bfc6-5ff85c254d7c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931019 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/13d61e05-db24-4d01-bfc6-5ff85c254d7c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931086 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b366080b-ad75-4205-9257-9d8a8ac5aed8-ovn-rundir\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931176 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13d61e05-db24-4d01-bfc6-5ff85c254d7c-scripts\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931204 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b366080b-ad75-4205-9257-9d8a8ac5aed8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931270 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b366080b-ad75-4205-9257-9d8a8ac5aed8-config\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931367 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dgcl\" (UniqueName: \"kubernetes.io/projected/b366080b-ad75-4205-9257-9d8a8ac5aed8-kube-api-access-5dgcl\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931396 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d61e05-db24-4d01-bfc6-5ff85c254d7c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:50 
crc kubenswrapper[4975]: I0122 10:54:50.931431 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbd2t\" (UniqueName: \"kubernetes.io/projected/13d61e05-db24-4d01-bfc6-5ff85c254d7c-kube-api-access-qbd2t\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931461 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b366080b-ad75-4205-9257-9d8a8ac5aed8-ovs-rundir\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931480 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931902 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b366080b-ad75-4205-9257-9d8a8ac5aed8-ovs-rundir\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.931931 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b366080b-ad75-4205-9257-9d8a8ac5aed8-ovn-rundir\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.932105 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b366080b-ad75-4205-9257-9d8a8ac5aed8-config\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.933639 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b366080b-ad75-4205-9257-9d8a8ac5aed8-combined-ca-bundle\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.943832 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b366080b-ad75-4205-9257-9d8a8ac5aed8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:50 crc kubenswrapper[4975]: I0122 10:54:50.947424 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dgcl\" (UniqueName: \"kubernetes.io/projected/b366080b-ad75-4205-9257-9d8a8ac5aed8-kube-api-access-5dgcl\") pod \"ovn-controller-metrics-kkcjz\" (UID: \"b366080b-ad75-4205-9257-9d8a8ac5aed8\") " pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.027832 4975 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kkcjz" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.032770 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13d61e05-db24-4d01-bfc6-5ff85c254d7c-scripts\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.032842 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d61e05-db24-4d01-bfc6-5ff85c254d7c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.032866 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbd2t\" (UniqueName: \"kubernetes.io/projected/13d61e05-db24-4d01-bfc6-5ff85c254d7c-kube-api-access-qbd2t\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.032889 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.032906 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.032926 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d61e05-db24-4d01-bfc6-5ff85c254d7c-config\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.032951 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.032984 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-dns-svc\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.033000 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d61e05-db24-4d01-bfc6-5ff85c254d7c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.033023 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-config\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.033041 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6vfn\" (UniqueName: \"kubernetes.io/projected/c13e808a-df7b-4367-b5e3-17aab90f3699-kube-api-access-b6vfn\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.033074 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d61e05-db24-4d01-bfc6-5ff85c254d7c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.033090 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/13d61e05-db24-4d01-bfc6-5ff85c254d7c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.033432 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/13d61e05-db24-4d01-bfc6-5ff85c254d7c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.033915 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13d61e05-db24-4d01-bfc6-5ff85c254d7c-scripts\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.033963 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-config\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: E0122 10:54:51.034004 4975 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 10:54:51 crc kubenswrapper[4975]: E0122 10:54:51.034016 4975 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 10:54:51 crc kubenswrapper[4975]: E0122 10:54:51.034049 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift podName:258efd08-8841-42a5-bfff-035926fa4a8a nodeName:}" failed. No retries permitted until 2026-01-22 10:54:53.034037667 +0000 UTC m=+1085.692073777 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift") pod "swift-storage-0" (UID: "258efd08-8841-42a5-bfff-035926fa4a8a") : configmap "swift-ring-files" not found Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.034806 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d61e05-db24-4d01-bfc6-5ff85c254d7c-config\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.035829 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.035961 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d61e05-db24-4d01-bfc6-5ff85c254d7c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.036048 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-dns-svc\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.036557 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.039664 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d61e05-db24-4d01-bfc6-5ff85c254d7c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.040394 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d61e05-db24-4d01-bfc6-5ff85c254d7c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.052916 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbd2t\" (UniqueName: \"kubernetes.io/projected/13d61e05-db24-4d01-bfc6-5ff85c254d7c-kube-api-access-qbd2t\") pod \"ovn-northd-0\" (UID: \"13d61e05-db24-4d01-bfc6-5ff85c254d7c\") " pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.054469 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6vfn\" (UniqueName: \"kubernetes.io/projected/c13e808a-df7b-4367-b5e3-17aab90f3699-kube-api-access-b6vfn\") pod \"dnsmasq-dns-698758b865-7dw86\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 
10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.106560 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.193961 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.212436 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vqxns"] Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.213722 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vqxns" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.222821 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ce3e-account-create-update-pqlkf"] Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.224103 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.228387 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vqxns"] Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.229157 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.245754 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ce3e-account-create-update-pqlkf"] Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.338536 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f9507c-123d-4f8d-ae32-c8ffefc98594-operator-scripts\") pod \"glance-ce3e-account-create-update-pqlkf\" (UID: \"16f9507c-123d-4f8d-ae32-c8ffefc98594\") " pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.338577 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc2626f-8d55-44c9-8478-cafd40077961-operator-scripts\") pod \"glance-db-create-vqxns\" (UID: \"2dc2626f-8d55-44c9-8478-cafd40077961\") " pod="openstack/glance-db-create-vqxns" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.338657 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldd4\" (UniqueName: \"kubernetes.io/projected/2dc2626f-8d55-44c9-8478-cafd40077961-kube-api-access-wldd4\") pod \"glance-db-create-vqxns\" (UID: \"2dc2626f-8d55-44c9-8478-cafd40077961\") " pod="openstack/glance-db-create-vqxns" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.338717 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzzjz\" (UniqueName: \"kubernetes.io/projected/16f9507c-123d-4f8d-ae32-c8ffefc98594-kube-api-access-wzzjz\") pod \"glance-ce3e-account-create-update-pqlkf\" (UID: \"16f9507c-123d-4f8d-ae32-c8ffefc98594\") " pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.344592 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" event={"ID":"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6","Type":"ContainerStarted","Data":"9a00b3b787f9d92056118867718da7f6a6af2210922bd40154084d59de3b5af5"} Jan 22 
10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.344743 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.346403 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.346414 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-d82q2" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.346434 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kqbs" event={"ID":"ee721893-cf9b-4712-b421-19d6a59d7cf7","Type":"ContainerStarted","Data":"07d0cca3e77f9dda62785ecdc089edb29d57be2e222716757a1f4911edbaab10"} Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.355389 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.365975 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" podStartSLOduration=3.36595885 podStartE2EDuration="3.36595885s" podCreationTimestamp="2026-01-22 10:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:54:51.365777805 +0000 UTC m=+1084.023813915" watchObservedRunningTime="2026-01-22 10:54:51.36595885 +0000 UTC m=+1084.023994950" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.401535 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-d82q2"] Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.408248 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-d82q2"] Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.439682 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-dns-svc\") pod \"74e6912c-e07e-4126-b273-90b6c0f8990d\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.439813 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-ovsdbserver-sb\") pod \"74e6912c-e07e-4126-b273-90b6c0f8990d\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.439861 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-config\") pod \"74e6912c-e07e-4126-b273-90b6c0f8990d\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.439882 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grl4s\" (UniqueName: \"kubernetes.io/projected/74e6912c-e07e-4126-b273-90b6c0f8990d-kube-api-access-grl4s\") pod \"74e6912c-e07e-4126-b273-90b6c0f8990d\" (UID: \"74e6912c-e07e-4126-b273-90b6c0f8990d\") " Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.440267 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-dns-svc" 
(OuterVolumeSpecName: "dns-svc") pod "74e6912c-e07e-4126-b273-90b6c0f8990d" (UID: "74e6912c-e07e-4126-b273-90b6c0f8990d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.440375 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-config" (OuterVolumeSpecName: "config") pod "74e6912c-e07e-4126-b273-90b6c0f8990d" (UID: "74e6912c-e07e-4126-b273-90b6c0f8990d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.440478 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74e6912c-e07e-4126-b273-90b6c0f8990d" (UID: "74e6912c-e07e-4126-b273-90b6c0f8990d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.440731 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzzjz\" (UniqueName: \"kubernetes.io/projected/16f9507c-123d-4f8d-ae32-c8ffefc98594-kube-api-access-wzzjz\") pod \"glance-ce3e-account-create-update-pqlkf\" (UID: \"16f9507c-123d-4f8d-ae32-c8ffefc98594\") " pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.440881 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f9507c-123d-4f8d-ae32-c8ffefc98594-operator-scripts\") pod \"glance-ce3e-account-create-update-pqlkf\" (UID: \"16f9507c-123d-4f8d-ae32-c8ffefc98594\") " pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.441567 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc2626f-8d55-44c9-8478-cafd40077961-operator-scripts\") pod \"glance-db-create-vqxns\" (UID: \"2dc2626f-8d55-44c9-8478-cafd40077961\") " pod="openstack/glance-db-create-vqxns" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.441783 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f9507c-123d-4f8d-ae32-c8ffefc98594-operator-scripts\") pod \"glance-ce3e-account-create-update-pqlkf\" (UID: \"16f9507c-123d-4f8d-ae32-c8ffefc98594\") " pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.443065 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc2626f-8d55-44c9-8478-cafd40077961-operator-scripts\") pod \"glance-db-create-vqxns\" (UID: \"2dc2626f-8d55-44c9-8478-cafd40077961\") " pod="openstack/glance-db-create-vqxns" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.443212 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wldd4\" (UniqueName: \"kubernetes.io/projected/2dc2626f-8d55-44c9-8478-cafd40077961-kube-api-access-wldd4\") pod \"glance-db-create-vqxns\" (UID: \"2dc2626f-8d55-44c9-8478-cafd40077961\") " pod="openstack/glance-db-create-vqxns" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.443312 4975 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.443350 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.443361 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e6912c-e07e-4126-b273-90b6c0f8990d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.453825 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e6912c-e07e-4126-b273-90b6c0f8990d-kube-api-access-grl4s" (OuterVolumeSpecName: "kube-api-access-grl4s") pod "74e6912c-e07e-4126-b273-90b6c0f8990d" (UID: "74e6912c-e07e-4126-b273-90b6c0f8990d"). InnerVolumeSpecName "kube-api-access-grl4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.458436 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzzjz\" (UniqueName: \"kubernetes.io/projected/16f9507c-123d-4f8d-ae32-c8ffefc98594-kube-api-access-wzzjz\") pod \"glance-ce3e-account-create-update-pqlkf\" (UID: \"16f9507c-123d-4f8d-ae32-c8ffefc98594\") " pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.461719 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wldd4\" (UniqueName: \"kubernetes.io/projected/2dc2626f-8d55-44c9-8478-cafd40077961-kube-api-access-wldd4\") pod \"glance-db-create-vqxns\" (UID: \"2dc2626f-8d55-44c9-8478-cafd40077961\") " pod="openstack/glance-db-create-vqxns" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.532827 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kkcjz"] Jan 22 10:54:51 crc kubenswrapper[4975]: W0122 10:54:51.540227 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb366080b_ad75_4205_9257_9d8a8ac5aed8.slice/crio-9d0fef541a7ffefe50e033ebad08880162cf545a571f0e149da6c15905157c93 WatchSource:0}: Error finding container 9d0fef541a7ffefe50e033ebad08880162cf545a571f0e149da6c15905157c93: Status 404 returned error can't find the container with id 9d0fef541a7ffefe50e033ebad08880162cf545a571f0e149da6c15905157c93 Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.545181 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grl4s\" (UniqueName: \"kubernetes.io/projected/74e6912c-e07e-4126-b273-90b6c0f8990d-kube-api-access-grl4s\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.546826 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vqxns" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.559501 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.708670 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7dw86"] Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.717790 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.739615 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.774165 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f" path="/var/lib/kubelet/pods/3c8ce52a-b6a9-4bb6-8867-937ac6fcf12f/volumes" Jan 22 10:54:51 crc kubenswrapper[4975]: I0122 10:54:51.876596 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.033286 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vqxns"] Jan 22 10:54:52 crc kubenswrapper[4975]: W0122 10:54:52.077638 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc2626f_8d55_44c9_8478_cafd40077961.slice/crio-5e1373eb51f0a87442c247027543d719f0efa40ede488be6e8d06a502a5679c5 WatchSource:0}: Error finding container 5e1373eb51f0a87442c247027543d719f0efa40ede488be6e8d06a502a5679c5: Status 404 returned error can't find the container with id 5e1373eb51f0a87442c247027543d719f0efa40ede488be6e8d06a502a5679c5 Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.116545 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ce3e-account-create-update-pqlkf"] Jan 22 10:54:52 crc kubenswrapper[4975]: W0122 10:54:52.121989 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16f9507c_123d_4f8d_ae32_c8ffefc98594.slice/crio-7bf63feecdfca18692ceb2214315e2c3235bc3a0712846f5c2430b663a687409 WatchSource:0}: Error finding container 7bf63feecdfca18692ceb2214315e2c3235bc3a0712846f5c2430b663a687409: Status 404 returned error can't find the container with id 7bf63feecdfca18692ceb2214315e2c3235bc3a0712846f5c2430b663a687409 Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.354232 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ce3e-account-create-update-pqlkf" event={"ID":"16f9507c-123d-4f8d-ae32-c8ffefc98594","Type":"ContainerStarted","Data":"7bf63feecdfca18692ceb2214315e2c3235bc3a0712846f5c2430b663a687409"} Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.356581 4975 generic.go:334] "Generic (PLEG): container finished" podID="c13e808a-df7b-4367-b5e3-17aab90f3699" containerID="d141807d6443e9bfe1060d5fe3ca52904b08d6e1149ce88632d6ae4c94fb0003" exitCode=0 Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.356649 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7dw86" event={"ID":"c13e808a-df7b-4367-b5e3-17aab90f3699","Type":"ContainerDied","Data":"d141807d6443e9bfe1060d5fe3ca52904b08d6e1149ce88632d6ae4c94fb0003"} Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.356673 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7dw86" 
event={"ID":"c13e808a-df7b-4367-b5e3-17aab90f3699","Type":"ContainerStarted","Data":"f0f2277de2e8d59765aed493426fd3a2f2f0ed0abdf51be53d9da8e2d514f0ef"} Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.358355 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kkcjz" event={"ID":"b366080b-ad75-4205-9257-9d8a8ac5aed8","Type":"ContainerStarted","Data":"6183d58d2bc933b53bb2f3c33564195317052ac5c9a1ab9572f4b2f67f249064"} Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.358390 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kkcjz" event={"ID":"b366080b-ad75-4205-9257-9d8a8ac5aed8","Type":"ContainerStarted","Data":"9d0fef541a7ffefe50e033ebad08880162cf545a571f0e149da6c15905157c93"} Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.359419 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"13d61e05-db24-4d01-bfc6-5ff85c254d7c","Type":"ContainerStarted","Data":"5326511eb455ab5e2cfd78c986ad18dda269faa660b8139867902979ee7bfa49"} Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.360928 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-84fpl" Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.360972 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vqxns" event={"ID":"2dc2626f-8d55-44c9-8478-cafd40077961","Type":"ContainerStarted","Data":"d6c3349ef786365d547d45984b689bb9eabedf3956d593a39c504f4d57e7cccc"} Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.360997 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vqxns" event={"ID":"2dc2626f-8d55-44c9-8478-cafd40077961","Type":"ContainerStarted","Data":"5e1373eb51f0a87442c247027543d719f0efa40ede488be6e8d06a502a5679c5"} Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.361585 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" podUID="0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" containerName="dnsmasq-dns" containerID="cri-o://9a00b3b787f9d92056118867718da7f6a6af2210922bd40154084d59de3b5af5" gracePeriod=10 Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.393488 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vqxns" podStartSLOduration=1.39346697 podStartE2EDuration="1.39346697s" podCreationTimestamp="2026-01-22 10:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:54:52.388992498 +0000 UTC m=+1085.047028608" watchObservedRunningTime="2026-01-22 10:54:52.39346697 +0000 UTC m=+1085.051503080" Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.457023 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-84fpl"] Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.466375 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-84fpl"] Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.466966 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-kkcjz" podStartSLOduration=2.466954317 podStartE2EDuration="2.466954317s" podCreationTimestamp="2026-01-22 10:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 10:54:52.4648958 +0000 UTC m=+1085.122931910" watchObservedRunningTime="2026-01-22 10:54:52.466954317 +0000 UTC m=+1085.124990427" Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.862425 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xwq5t"] Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.864371 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.868026 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.868863 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xwq5t"] Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.986314 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81869f8b-1f2a-481f-ba1d-8453c0c57555-operator-scripts\") pod \"root-account-create-update-xwq5t\" (UID: \"81869f8b-1f2a-481f-ba1d-8453c0c57555\") " pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:52 crc kubenswrapper[4975]: I0122 10:54:52.986426 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhl84\" (UniqueName: \"kubernetes.io/projected/81869f8b-1f2a-481f-ba1d-8453c0c57555-kube-api-access-zhl84\") pod \"root-account-create-update-xwq5t\" (UID: \"81869f8b-1f2a-481f-ba1d-8453c0c57555\") " pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.088230 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81869f8b-1f2a-481f-ba1d-8453c0c57555-operator-scripts\") pod \"root-account-create-update-xwq5t\" (UID: \"81869f8b-1f2a-481f-ba1d-8453c0c57555\") " pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.088310 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhl84\" (UniqueName: \"kubernetes.io/projected/81869f8b-1f2a-481f-ba1d-8453c0c57555-kube-api-access-zhl84\") pod \"root-account-create-update-xwq5t\" (UID: \"81869f8b-1f2a-481f-ba1d-8453c0c57555\") " pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.088410 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:53 crc kubenswrapper[4975]: E0122 10:54:53.088641 4975 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 10:54:53 crc kubenswrapper[4975]: E0122 10:54:53.088678 4975 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 10:54:53 crc kubenswrapper[4975]: E0122 10:54:53.088738 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift podName:258efd08-8841-42a5-bfff-035926fa4a8a nodeName:}" failed. 
No retries permitted until 2026-01-22 10:54:57.088718476 +0000 UTC m=+1089.746754596 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift") pod "swift-storage-0" (UID: "258efd08-8841-42a5-bfff-035926fa4a8a") : configmap "swift-ring-files" not found Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.089364 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81869f8b-1f2a-481f-ba1d-8453c0c57555-operator-scripts\") pod \"root-account-create-update-xwq5t\" (UID: \"81869f8b-1f2a-481f-ba1d-8453c0c57555\") " pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.113286 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhl84\" (UniqueName: \"kubernetes.io/projected/81869f8b-1f2a-481f-ba1d-8453c0c57555-kube-api-access-zhl84\") pod \"root-account-create-update-xwq5t\" (UID: \"81869f8b-1f2a-481f-ba1d-8453c0c57555\") " pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.185571 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.375674 4975 generic.go:334] "Generic (PLEG): container finished" podID="0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" containerID="9a00b3b787f9d92056118867718da7f6a6af2210922bd40154084d59de3b5af5" exitCode=0 Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.376041 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" event={"ID":"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6","Type":"ContainerDied","Data":"9a00b3b787f9d92056118867718da7f6a6af2210922bd40154084d59de3b5af5"} Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.384632 4975 generic.go:334] "Generic (PLEG): container finished" podID="2dc2626f-8d55-44c9-8478-cafd40077961" containerID="d6c3349ef786365d547d45984b689bb9eabedf3956d593a39c504f4d57e7cccc" exitCode=0 Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.385456 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vqxns" event={"ID":"2dc2626f-8d55-44c9-8478-cafd40077961","Type":"ContainerDied","Data":"d6c3349ef786365d547d45984b689bb9eabedf3956d593a39c504f4d57e7cccc"} Jan 22 10:54:53 crc kubenswrapper[4975]: I0122 10:54:53.768958 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e6912c-e07e-4126-b273-90b6c0f8990d" path="/var/lib/kubelet/pods/74e6912c-e07e-4126-b273-90b6c0f8990d/volumes" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.510407 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.616142 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-config\") pod \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.618397 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xphhd\" (UniqueName: \"kubernetes.io/projected/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-kube-api-access-xphhd\") pod \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.618458 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-dns-svc\") pod \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\" (UID: \"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6\") " Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.624741 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-kube-api-access-xphhd" (OuterVolumeSpecName: "kube-api-access-xphhd") pod "0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" (UID: "0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6"). InnerVolumeSpecName "kube-api-access-xphhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.658628 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-config" (OuterVolumeSpecName: "config") pod "0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" (UID: "0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.666466 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" (UID: "0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.720404 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xphhd\" (UniqueName: \"kubernetes.io/projected/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-kube-api-access-xphhd\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.720435 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.720445 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.860110 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vqxns" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.923013 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc2626f-8d55-44c9-8478-cafd40077961-operator-scripts\") pod \"2dc2626f-8d55-44c9-8478-cafd40077961\" (UID: \"2dc2626f-8d55-44c9-8478-cafd40077961\") " Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.923842 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc2626f-8d55-44c9-8478-cafd40077961-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2dc2626f-8d55-44c9-8478-cafd40077961" (UID: "2dc2626f-8d55-44c9-8478-cafd40077961"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.924004 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wldd4\" (UniqueName: \"kubernetes.io/projected/2dc2626f-8d55-44c9-8478-cafd40077961-kube-api-access-wldd4\") pod \"2dc2626f-8d55-44c9-8478-cafd40077961\" (UID: \"2dc2626f-8d55-44c9-8478-cafd40077961\") " Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.924770 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc2626f-8d55-44c9-8478-cafd40077961-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:54 crc kubenswrapper[4975]: I0122 10:54:54.933554 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc2626f-8d55-44c9-8478-cafd40077961-kube-api-access-wldd4" (OuterVolumeSpecName: "kube-api-access-wldd4") pod "2dc2626f-8d55-44c9-8478-cafd40077961" (UID: "2dc2626f-8d55-44c9-8478-cafd40077961"). InnerVolumeSpecName "kube-api-access-wldd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.025882 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wldd4\" (UniqueName: \"kubernetes.io/projected/2dc2626f-8d55-44c9-8478-cafd40077961-kube-api-access-wldd4\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.204210 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xwq5t"] Jan 22 10:54:55 crc kubenswrapper[4975]: W0122 10:54:55.219142 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81869f8b_1f2a_481f_ba1d_8453c0c57555.slice/crio-7a694abc5b1ac30c52c67bfa8264e3dabea3b245ac89046bb56c6658534e0745 WatchSource:0}: Error finding container 7a694abc5b1ac30c52c67bfa8264e3dabea3b245ac89046bb56c6658534e0745: Status 404 returned error can't find the container with id 7a694abc5b1ac30c52c67bfa8264e3dabea3b245ac89046bb56c6658534e0745 Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.409927 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"13d61e05-db24-4d01-bfc6-5ff85c254d7c","Type":"ContainerStarted","Data":"79732b222c3b7e429c1a3b0506240f65fa309e621533df7f4aee6f3aa1e122c6"} Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.410456 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"13d61e05-db24-4d01-bfc6-5ff85c254d7c","Type":"ContainerStarted","Data":"a18271177b101612473a1b9808161845c140a82759f7e68a9092ebd3e267ca88"} Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.412673 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.415571 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vqxns" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.416804 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vqxns" event={"ID":"2dc2626f-8d55-44c9-8478-cafd40077961","Type":"ContainerDied","Data":"5e1373eb51f0a87442c247027543d719f0efa40ede488be6e8d06a502a5679c5"} Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.416841 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e1373eb51f0a87442c247027543d719f0efa40ede488be6e8d06a502a5679c5" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.424576 4975 generic.go:334] "Generic (PLEG): container finished" podID="16f9507c-123d-4f8d-ae32-c8ffefc98594" containerID="2deafc08e859d44a1a5d8a7587716e982c5ae6eb951bbeea2a680d96611cf705" exitCode=0 Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.424651 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ce3e-account-create-update-pqlkf" event={"ID":"16f9507c-123d-4f8d-ae32-c8ffefc98594","Type":"ContainerDied","Data":"2deafc08e859d44a1a5d8a7587716e982c5ae6eb951bbeea2a680d96611cf705"} Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.426635 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7dw86" event={"ID":"c13e808a-df7b-4367-b5e3-17aab90f3699","Type":"ContainerStarted","Data":"95b80824a79fea3e12f167b859314c6382cb9abd65e1b5f0aa7c2febd3ffd38e"} Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.427616 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.429671 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kqbs" event={"ID":"ee721893-cf9b-4712-b421-19d6a59d7cf7","Type":"ContainerStarted","Data":"ea0469a6260fe2f7e46a0cc59f2d0b726e1a5a902adc250a76ba30df3a31ea57"} Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.448479 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xwq5t" event={"ID":"81869f8b-1f2a-481f-ba1d-8453c0c57555","Type":"ContainerStarted","Data":"7a694abc5b1ac30c52c67bfa8264e3dabea3b245ac89046bb56c6658534e0745"} Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.463223 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" event={"ID":"0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6","Type":"ContainerDied","Data":"2bccb611d5a4df395a7bcc5354a7464de3dae22a3d2b6d099af750568f4ed1a2"} Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.463278 4975 scope.go:117] "RemoveContainer" containerID="9a00b3b787f9d92056118867718da7f6a6af2210922bd40154084d59de3b5af5" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.463500 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-ntnkr" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.464428 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.370344729 podStartE2EDuration="5.464402113s" podCreationTimestamp="2026-01-22 10:54:50 +0000 UTC" firstStartedPulling="2026-01-22 10:54:51.745402703 +0000 UTC m=+1084.403438813" lastFinishedPulling="2026-01-22 10:54:54.839460087 +0000 UTC m=+1087.497496197" observedRunningTime="2026-01-22 10:54:55.454619356 +0000 UTC m=+1088.112655476" watchObservedRunningTime="2026-01-22 10:54:55.464402113 +0000 UTC m=+1088.122438243" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.484621 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7kqbs" podStartSLOduration=2.370338419 podStartE2EDuration="6.484598134s" podCreationTimestamp="2026-01-22 10:54:49 +0000 UTC" firstStartedPulling="2026-01-22 10:54:50.659540989 +0000 UTC m=+1083.317577099" lastFinishedPulling="2026-01-22 10:54:54.773800704 +0000 UTC m=+1087.431836814" observedRunningTime="2026-01-22 10:54:55.484211304 +0000 UTC m=+1088.142247424" watchObservedRunningTime="2026-01-22 10:54:55.484598134 +0000 UTC m=+1088.142634254" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.501622 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xwq5t" podStartSLOduration=3.501455984 podStartE2EDuration="3.501455984s" podCreationTimestamp="2026-01-22 10:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:54:55.497821325 +0000 UTC m=+1088.155857435" watchObservedRunningTime="2026-01-22 10:54:55.501455984 +0000 UTC m=+1088.159492104" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.543002 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-7dw86" podStartSLOduration=5.542982429 podStartE2EDuration="5.542982429s" podCreationTimestamp="2026-01-22 10:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:54:55.53204979 +0000 UTC m=+1088.190085940" watchObservedRunningTime="2026-01-22 10:54:55.542982429 +0000 UTC m=+1088.201018539" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.543277 4975 scope.go:117] "RemoveContainer" containerID="546caae73a01a554cd926a19528dba5dbfd99f6ea51fc37f7a84aee5b04b804b" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.570700 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-psnps"] Jan 22 10:54:55 crc kubenswrapper[4975]: E0122 10:54:55.571069 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc2626f-8d55-44c9-8478-cafd40077961" containerName="mariadb-database-create" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.571088 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc2626f-8d55-44c9-8478-cafd40077961" containerName="mariadb-database-create" Jan 22 10:54:55 crc kubenswrapper[4975]: E0122 10:54:55.571130 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" containerName="dnsmasq-dns" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.571137 4975 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" containerName="dnsmasq-dns" Jan 22 10:54:55 crc kubenswrapper[4975]: E0122 10:54:55.571156 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" containerName="init" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.571164 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" containerName="init" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.571310 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" containerName="dnsmasq-dns" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.571322 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc2626f-8d55-44c9-8478-cafd40077961" containerName="mariadb-database-create" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.571805 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-psnps" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.588554 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-psnps"] Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.603887 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-ntnkr"] Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.610811 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-ntnkr"] Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.638054 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8v94\" (UniqueName: \"kubernetes.io/projected/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-kube-api-access-l8v94\") pod \"keystone-db-create-psnps\" (UID: \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\") " pod="openstack/keystone-db-create-psnps" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.638143 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-operator-scripts\") pod \"keystone-db-create-psnps\" (UID: \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\") " pod="openstack/keystone-db-create-psnps" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.648165 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-eff0-account-create-update-q2t5p"] Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.656930 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-eff0-account-create-update-q2t5p" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.659831 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.684047 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eff0-account-create-update-q2t5p"] Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.739472 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwdl\" (UniqueName: \"kubernetes.io/projected/155e5871-28a0-48dc-9bbf-1445a44ad371-kube-api-access-xfwdl\") pod \"keystone-eff0-account-create-update-q2t5p\" (UID: \"155e5871-28a0-48dc-9bbf-1445a44ad371\") " pod="openstack/keystone-eff0-account-create-update-q2t5p" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.739654 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155e5871-28a0-48dc-9bbf-1445a44ad371-operator-scripts\") pod \"keystone-eff0-account-create-update-q2t5p\" (UID: \"155e5871-28a0-48dc-9bbf-1445a44ad371\") " pod="openstack/keystone-eff0-account-create-update-q2t5p" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.739738 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8v94\" (UniqueName: \"kubernetes.io/projected/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-kube-api-access-l8v94\") pod \"keystone-db-create-psnps\" (UID: \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\") " pod="openstack/keystone-db-create-psnps" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.739933 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-operator-scripts\") pod \"keystone-db-create-psnps\" (UID: \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\") " pod="openstack/keystone-db-create-psnps" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.740496 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-operator-scripts\") pod \"keystone-db-create-psnps\" (UID: \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\") " pod="openstack/keystone-db-create-psnps" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.755509 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8v94\" (UniqueName: \"kubernetes.io/projected/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-kube-api-access-l8v94\") pod \"keystone-db-create-psnps\" (UID: \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\") " pod="openstack/keystone-db-create-psnps" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.763489 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6" path="/var/lib/kubelet/pods/0e5cf2cd-d06c-40ee-b9b2-06b1342ae3c6/volumes" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.802044 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-b94rh"] Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.803107 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b94rh" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.823745 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b94rh"] Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.841983 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hf9w\" (UniqueName: \"kubernetes.io/projected/35572d60-24e6-424c-90a5-3c7566536764-kube-api-access-9hf9w\") pod \"placement-db-create-b94rh\" (UID: \"35572d60-24e6-424c-90a5-3c7566536764\") " pod="openstack/placement-db-create-b94rh" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.842071 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfwdl\" (UniqueName: \"kubernetes.io/projected/155e5871-28a0-48dc-9bbf-1445a44ad371-kube-api-access-xfwdl\") pod \"keystone-eff0-account-create-update-q2t5p\" (UID: \"155e5871-28a0-48dc-9bbf-1445a44ad371\") " pod="openstack/keystone-eff0-account-create-update-q2t5p" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.842102 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35572d60-24e6-424c-90a5-3c7566536764-operator-scripts\") pod \"placement-db-create-b94rh\" (UID: \"35572d60-24e6-424c-90a5-3c7566536764\") " pod="openstack/placement-db-create-b94rh" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.842132 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155e5871-28a0-48dc-9bbf-1445a44ad371-operator-scripts\") pod \"keystone-eff0-account-create-update-q2t5p\" (UID: \"155e5871-28a0-48dc-9bbf-1445a44ad371\") " pod="openstack/keystone-eff0-account-create-update-q2t5p" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.842877 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155e5871-28a0-48dc-9bbf-1445a44ad371-operator-scripts\") pod \"keystone-eff0-account-create-update-q2t5p\" (UID: \"155e5871-28a0-48dc-9bbf-1445a44ad371\") " pod="openstack/keystone-eff0-account-create-update-q2t5p" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.856824 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfwdl\" (UniqueName: \"kubernetes.io/projected/155e5871-28a0-48dc-9bbf-1445a44ad371-kube-api-access-xfwdl\") pod \"keystone-eff0-account-create-update-q2t5p\" (UID: \"155e5871-28a0-48dc-9bbf-1445a44ad371\") " pod="openstack/keystone-eff0-account-create-update-q2t5p" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.899742 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-psnps" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.951993 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1e00-account-create-update-456qq"] Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.954476 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hf9w\" (UniqueName: \"kubernetes.io/projected/35572d60-24e6-424c-90a5-3c7566536764-kube-api-access-9hf9w\") pod \"placement-db-create-b94rh\" (UID: \"35572d60-24e6-424c-90a5-3c7566536764\") " pod="openstack/placement-db-create-b94rh" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.954595 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35572d60-24e6-424c-90a5-3c7566536764-operator-scripts\") pod \"placement-db-create-b94rh\" (UID: \"35572d60-24e6-424c-90a5-3c7566536764\") " pod="openstack/placement-db-create-b94rh" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.954854 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1e00-account-create-update-456qq" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.955551 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35572d60-24e6-424c-90a5-3c7566536764-operator-scripts\") pod \"placement-db-create-b94rh\" (UID: \"35572d60-24e6-424c-90a5-3c7566536764\") " pod="openstack/placement-db-create-b94rh" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.956943 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.969035 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1e00-account-create-update-456qq"] Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.974727 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hf9w\" (UniqueName: \"kubernetes.io/projected/35572d60-24e6-424c-90a5-3c7566536764-kube-api-access-9hf9w\") pod \"placement-db-create-b94rh\" (UID: \"35572d60-24e6-424c-90a5-3c7566536764\") " pod="openstack/placement-db-create-b94rh" Jan 22 10:54:55 crc kubenswrapper[4975]: I0122 10:54:55.995130 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eff0-account-create-update-q2t5p" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.055794 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ngbr\" (UniqueName: \"kubernetes.io/projected/d49d3932-3b57-4488-ae1b-3b2f95e6385f-kube-api-access-4ngbr\") pod \"placement-1e00-account-create-update-456qq\" (UID: \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\") " pod="openstack/placement-1e00-account-create-update-456qq" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.056002 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49d3932-3b57-4488-ae1b-3b2f95e6385f-operator-scripts\") pod \"placement-1e00-account-create-update-456qq\" (UID: \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\") " pod="openstack/placement-1e00-account-create-update-456qq" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.132805 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b94rh" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.158362 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ngbr\" (UniqueName: \"kubernetes.io/projected/d49d3932-3b57-4488-ae1b-3b2f95e6385f-kube-api-access-4ngbr\") pod \"placement-1e00-account-create-update-456qq\" (UID: \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\") " pod="openstack/placement-1e00-account-create-update-456qq" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.158478 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49d3932-3b57-4488-ae1b-3b2f95e6385f-operator-scripts\") pod \"placement-1e00-account-create-update-456qq\" (UID: \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\") " pod="openstack/placement-1e00-account-create-update-456qq" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.161704 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49d3932-3b57-4488-ae1b-3b2f95e6385f-operator-scripts\") pod \"placement-1e00-account-create-update-456qq\" (UID: \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\") " pod="openstack/placement-1e00-account-create-update-456qq" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.176181 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ngbr\" (UniqueName: \"kubernetes.io/projected/d49d3932-3b57-4488-ae1b-3b2f95e6385f-kube-api-access-4ngbr\") pod \"placement-1e00-account-create-update-456qq\" (UID: \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\") " pod="openstack/placement-1e00-account-create-update-456qq" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.297073 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1e00-account-create-update-456qq" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.379862 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-psnps"] Jan 22 10:54:56 crc kubenswrapper[4975]: W0122 10:54:56.382318 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce9a43fe_c0ea_4f62_bb5d_4a3c5c77ebaf.slice/crio-57cd8c87eb4f9e8c5ad3589448fbf0126c11811c482c45e4959f2d5a53745a4a WatchSource:0}: Error finding container 57cd8c87eb4f9e8c5ad3589448fbf0126c11811c482c45e4959f2d5a53745a4a: Status 404 returned error can't find the container with id 57cd8c87eb4f9e8c5ad3589448fbf0126c11811c482c45e4959f2d5a53745a4a Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.465254 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eff0-account-create-update-q2t5p"] Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.477724 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-psnps" event={"ID":"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf","Type":"ContainerStarted","Data":"57cd8c87eb4f9e8c5ad3589448fbf0126c11811c482c45e4959f2d5a53745a4a"} Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.480228 4975 generic.go:334] "Generic (PLEG): container finished" podID="81869f8b-1f2a-481f-ba1d-8453c0c57555" containerID="aa65bf0422d1f3ad1d38f517328c447c1e5535727f8eac18f932e4a93a8732d4" exitCode=0 Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.481481 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xwq5t" event={"ID":"81869f8b-1f2a-481f-ba1d-8453c0c57555","Type":"ContainerDied","Data":"aa65bf0422d1f3ad1d38f517328c447c1e5535727f8eac18f932e4a93a8732d4"} Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.579118 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b94rh"] Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.765185 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.822237 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1e00-account-create-update-456qq"] Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.875514 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzzjz\" (UniqueName: \"kubernetes.io/projected/16f9507c-123d-4f8d-ae32-c8ffefc98594-kube-api-access-wzzjz\") pod \"16f9507c-123d-4f8d-ae32-c8ffefc98594\" (UID: \"16f9507c-123d-4f8d-ae32-c8ffefc98594\") " Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.875607 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f9507c-123d-4f8d-ae32-c8ffefc98594-operator-scripts\") pod \"16f9507c-123d-4f8d-ae32-c8ffefc98594\" (UID: \"16f9507c-123d-4f8d-ae32-c8ffefc98594\") " Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.876454 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f9507c-123d-4f8d-ae32-c8ffefc98594-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16f9507c-123d-4f8d-ae32-c8ffefc98594" (UID: "16f9507c-123d-4f8d-ae32-c8ffefc98594"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.881697 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f9507c-123d-4f8d-ae32-c8ffefc98594-kube-api-access-wzzjz" (OuterVolumeSpecName: "kube-api-access-wzzjz") pod "16f9507c-123d-4f8d-ae32-c8ffefc98594" (UID: "16f9507c-123d-4f8d-ae32-c8ffefc98594"). InnerVolumeSpecName "kube-api-access-wzzjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.977315 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzzjz\" (UniqueName: \"kubernetes.io/projected/16f9507c-123d-4f8d-ae32-c8ffefc98594-kube-api-access-wzzjz\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:56 crc kubenswrapper[4975]: I0122 10:54:56.977628 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f9507c-123d-4f8d-ae32-c8ffefc98594-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.180623 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0" Jan 22 10:54:57 crc kubenswrapper[4975]: E0122 10:54:57.180931 4975 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 10:54:57 crc kubenswrapper[4975]: E0122 10:54:57.180973 4975 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 10:54:57 crc kubenswrapper[4975]: E0122 10:54:57.181048 4975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift podName:258efd08-8841-42a5-bfff-035926fa4a8a nodeName:}" failed. No retries permitted until 2026-01-22 10:55:05.181020971 +0000 UTC m=+1097.839057111 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift") pod "swift-storage-0" (UID: "258efd08-8841-42a5-bfff-035926fa4a8a") : configmap "swift-ring-files" not found Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.493413 4975 generic.go:334] "Generic (PLEG): container finished" podID="ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf" containerID="bf52444ea72e2c6b386dfbea5730a7b5ce54b1c733c95a62e05867fa81baa174" exitCode=0 Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.493467 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-psnps" event={"ID":"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf","Type":"ContainerDied","Data":"bf52444ea72e2c6b386dfbea5730a7b5ce54b1c733c95a62e05867fa81baa174"} Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.497582 4975 generic.go:334] "Generic (PLEG): container finished" podID="d49d3932-3b57-4488-ae1b-3b2f95e6385f" containerID="bcd23be3a1cb1871320f315681136f76c91ab653587376c2d442474f19bb5f7d" exitCode=0 Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.497617 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1e00-account-create-update-456qq" event={"ID":"d49d3932-3b57-4488-ae1b-3b2f95e6385f","Type":"ContainerDied","Data":"bcd23be3a1cb1871320f315681136f76c91ab653587376c2d442474f19bb5f7d"} Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.497650 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1e00-account-create-update-456qq" event={"ID":"d49d3932-3b57-4488-ae1b-3b2f95e6385f","Type":"ContainerStarted","Data":"6801ffce86889eca53d58c38291c668b03e0c0f1db1b5e71f7ab32b7d1b5fbc4"} Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.503268 4975 generic.go:334] "Generic (PLEG): container finished" podID="35572d60-24e6-424c-90a5-3c7566536764" containerID="dfd40ac8494db26513883464d3fb20725e3df030b8c43ce8b054e4a98b0322fb" exitCode=0 Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.503376 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b94rh" event={"ID":"35572d60-24e6-424c-90a5-3c7566536764","Type":"ContainerDied","Data":"dfd40ac8494db26513883464d3fb20725e3df030b8c43ce8b054e4a98b0322fb"} Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.503451 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b94rh" event={"ID":"35572d60-24e6-424c-90a5-3c7566536764","Type":"ContainerStarted","Data":"ec53c3d7c1904491efa793fbd8221ea438b381e754b6ddff356f5ea2b06f4521"} Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.507071 4975 generic.go:334] "Generic (PLEG): container finished" podID="155e5871-28a0-48dc-9bbf-1445a44ad371" containerID="21d56dcce374be4360e8cdd9bb1732a2cf1d049931c93d5488e783434013e169" exitCode=0 Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.507204 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eff0-account-create-update-q2t5p" event={"ID":"155e5871-28a0-48dc-9bbf-1445a44ad371","Type":"ContainerDied","Data":"21d56dcce374be4360e8cdd9bb1732a2cf1d049931c93d5488e783434013e169"} Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.507253 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eff0-account-create-update-q2t5p" event={"ID":"155e5871-28a0-48dc-9bbf-1445a44ad371","Type":"ContainerStarted","Data":"b47027b98bebeb7bd291c0c9e92870564ac75c2399fa583dd23c4c169e76e95f"} Jan 22 10:54:57 crc kubenswrapper[4975]: 
I0122 10:54:57.510708 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ce3e-account-create-update-pqlkf" event={"ID":"16f9507c-123d-4f8d-ae32-c8ffefc98594","Type":"ContainerDied","Data":"7bf63feecdfca18692ceb2214315e2c3235bc3a0712846f5c2430b663a687409"} Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.510782 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf63feecdfca18692ceb2214315e2c3235bc3a0712846f5c2430b663a687409" Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.510907 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ce3e-account-create-update-pqlkf" Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.926141 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.996804 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81869f8b-1f2a-481f-ba1d-8453c0c57555-operator-scripts\") pod \"81869f8b-1f2a-481f-ba1d-8453c0c57555\" (UID: \"81869f8b-1f2a-481f-ba1d-8453c0c57555\") " Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.997174 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhl84\" (UniqueName: \"kubernetes.io/projected/81869f8b-1f2a-481f-ba1d-8453c0c57555-kube-api-access-zhl84\") pod \"81869f8b-1f2a-481f-ba1d-8453c0c57555\" (UID: \"81869f8b-1f2a-481f-ba1d-8453c0c57555\") " Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.997213 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81869f8b-1f2a-481f-ba1d-8453c0c57555-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81869f8b-1f2a-481f-ba1d-8453c0c57555" (UID: "81869f8b-1f2a-481f-ba1d-8453c0c57555"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:57 crc kubenswrapper[4975]: I0122 10:54:57.997839 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81869f8b-1f2a-481f-ba1d-8453c0c57555-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:58 crc kubenswrapper[4975]: I0122 10:54:58.023512 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81869f8b-1f2a-481f-ba1d-8453c0c57555-kube-api-access-zhl84" (OuterVolumeSpecName: "kube-api-access-zhl84") pod "81869f8b-1f2a-481f-ba1d-8453c0c57555" (UID: "81869f8b-1f2a-481f-ba1d-8453c0c57555"). InnerVolumeSpecName "kube-api-access-zhl84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:58 crc kubenswrapper[4975]: I0122 10:54:58.099268 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhl84\" (UniqueName: \"kubernetes.io/projected/81869f8b-1f2a-481f-ba1d-8453c0c57555-kube-api-access-zhl84\") on node \"crc\" DevicePath \"\"" Jan 22 10:54:58 crc kubenswrapper[4975]: I0122 10:54:58.526656 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xwq5t" event={"ID":"81869f8b-1f2a-481f-ba1d-8453c0c57555","Type":"ContainerDied","Data":"7a694abc5b1ac30c52c67bfa8264e3dabea3b245ac89046bb56c6658534e0745"} Jan 22 10:54:58 crc kubenswrapper[4975]: I0122 10:54:58.528006 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a694abc5b1ac30c52c67bfa8264e3dabea3b245ac89046bb56c6658534e0745" Jan 22 10:54:58 crc kubenswrapper[4975]: I0122 10:54:58.526829 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xwq5t" Jan 22 10:54:58 crc kubenswrapper[4975]: I0122 10:54:58.951101 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1e00-account-create-update-456qq" Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.013086 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ngbr\" (UniqueName: \"kubernetes.io/projected/d49d3932-3b57-4488-ae1b-3b2f95e6385f-kube-api-access-4ngbr\") pod \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\" (UID: \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\") " Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.013202 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49d3932-3b57-4488-ae1b-3b2f95e6385f-operator-scripts\") pod \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\" (UID: \"d49d3932-3b57-4488-ae1b-3b2f95e6385f\") " Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.013853 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49d3932-3b57-4488-ae1b-3b2f95e6385f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d49d3932-3b57-4488-ae1b-3b2f95e6385f" (UID: "d49d3932-3b57-4488-ae1b-3b2f95e6385f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.017318 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49d3932-3b57-4488-ae1b-3b2f95e6385f-kube-api-access-4ngbr" (OuterVolumeSpecName: "kube-api-access-4ngbr") pod "d49d3932-3b57-4488-ae1b-3b2f95e6385f" (UID: "d49d3932-3b57-4488-ae1b-3b2f95e6385f"). InnerVolumeSpecName "kube-api-access-4ngbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.077421 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eff0-account-create-update-q2t5p" Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.083011 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-psnps" Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.088652 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b94rh"
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.117935 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35572d60-24e6-424c-90a5-3c7566536764-operator-scripts\") pod \"35572d60-24e6-424c-90a5-3c7566536764\" (UID: \"35572d60-24e6-424c-90a5-3c7566536764\") "
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.118055 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-operator-scripts\") pod \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\" (UID: \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\") "
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.118740 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf" (UID: "ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.118869 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155e5871-28a0-48dc-9bbf-1445a44ad371-operator-scripts\") pod \"155e5871-28a0-48dc-9bbf-1445a44ad371\" (UID: \"155e5871-28a0-48dc-9bbf-1445a44ad371\") "
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.119005 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35572d60-24e6-424c-90a5-3c7566536764-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35572d60-24e6-424c-90a5-3c7566536764" (UID: "35572d60-24e6-424c-90a5-3c7566536764"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.119237 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/155e5871-28a0-48dc-9bbf-1445a44ad371-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "155e5871-28a0-48dc-9bbf-1445a44ad371" (UID: "155e5871-28a0-48dc-9bbf-1445a44ad371"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.119315 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8v94\" (UniqueName: \"kubernetes.io/projected/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-kube-api-access-l8v94\") pod \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\" (UID: \"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf\") "
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.119943 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hf9w\" (UniqueName: \"kubernetes.io/projected/35572d60-24e6-424c-90a5-3c7566536764-kube-api-access-9hf9w\") pod \"35572d60-24e6-424c-90a5-3c7566536764\" (UID: \"35572d60-24e6-424c-90a5-3c7566536764\") "
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.120052 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfwdl\" (UniqueName: \"kubernetes.io/projected/155e5871-28a0-48dc-9bbf-1445a44ad371-kube-api-access-xfwdl\") pod \"155e5871-28a0-48dc-9bbf-1445a44ad371\" (UID: \"155e5871-28a0-48dc-9bbf-1445a44ad371\") "
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.120447 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ngbr\" (UniqueName: \"kubernetes.io/projected/d49d3932-3b57-4488-ae1b-3b2f95e6385f-kube-api-access-4ngbr\") on node \"crc\" DevicePath \"\""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.120461 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35572d60-24e6-424c-90a5-3c7566536764-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.120469 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.120478 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49d3932-3b57-4488-ae1b-3b2f95e6385f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.120485 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/155e5871-28a0-48dc-9bbf-1445a44ad371-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.123733 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155e5871-28a0-48dc-9bbf-1445a44ad371-kube-api-access-xfwdl" (OuterVolumeSpecName: "kube-api-access-xfwdl") pod "155e5871-28a0-48dc-9bbf-1445a44ad371" (UID: "155e5871-28a0-48dc-9bbf-1445a44ad371"). InnerVolumeSpecName "kube-api-access-xfwdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.126512 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-kube-api-access-l8v94" (OuterVolumeSpecName: "kube-api-access-l8v94") pod "ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf" (UID: "ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf"). InnerVolumeSpecName "kube-api-access-l8v94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.130002 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35572d60-24e6-424c-90a5-3c7566536764-kube-api-access-9hf9w" (OuterVolumeSpecName: "kube-api-access-9hf9w") pod "35572d60-24e6-424c-90a5-3c7566536764" (UID: "35572d60-24e6-424c-90a5-3c7566536764"). InnerVolumeSpecName "kube-api-access-9hf9w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.183494 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xwq5t"]
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.189708 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xwq5t"]
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.221822 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8v94\" (UniqueName: \"kubernetes.io/projected/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf-kube-api-access-l8v94\") on node \"crc\" DevicePath \"\""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.221849 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hf9w\" (UniqueName: \"kubernetes.io/projected/35572d60-24e6-424c-90a5-3c7566536764-kube-api-access-9hf9w\") on node \"crc\" DevicePath \"\""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.221861 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfwdl\" (UniqueName: \"kubernetes.io/projected/155e5871-28a0-48dc-9bbf-1445a44ad371-kube-api-access-xfwdl\") on node \"crc\" DevicePath \"\""
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.538564 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-psnps" event={"ID":"ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf","Type":"ContainerDied","Data":"57cd8c87eb4f9e8c5ad3589448fbf0126c11811c482c45e4959f2d5a53745a4a"}
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.538609 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-psnps"
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.538618 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57cd8c87eb4f9e8c5ad3589448fbf0126c11811c482c45e4959f2d5a53745a4a"
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.541152 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1e00-account-create-update-456qq" event={"ID":"d49d3932-3b57-4488-ae1b-3b2f95e6385f","Type":"ContainerDied","Data":"6801ffce86889eca53d58c38291c668b03e0c0f1db1b5e71f7ab32b7d1b5fbc4"}
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.541167 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1e00-account-create-update-456qq"
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.541271 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6801ffce86889eca53d58c38291c668b03e0c0f1db1b5e71f7ab32b7d1b5fbc4"
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.543053 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b94rh" event={"ID":"35572d60-24e6-424c-90a5-3c7566536764","Type":"ContainerDied","Data":"ec53c3d7c1904491efa793fbd8221ea438b381e754b6ddff356f5ea2b06f4521"}
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.543090 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec53c3d7c1904491efa793fbd8221ea438b381e754b6ddff356f5ea2b06f4521"
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.543145 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b94rh"
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.545653 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eff0-account-create-update-q2t5p"
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.545566 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eff0-account-create-update-q2t5p" event={"ID":"155e5871-28a0-48dc-9bbf-1445a44ad371","Type":"ContainerDied","Data":"b47027b98bebeb7bd291c0c9e92870564ac75c2399fa583dd23c4c169e76e95f"}
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.558131 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b47027b98bebeb7bd291c0c9e92870564ac75c2399fa583dd23c4c169e76e95f"
Jan 22 10:54:59 crc kubenswrapper[4975]: I0122 10:54:59.772654 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81869f8b-1f2a-481f-ba1d-8453c0c57555" path="/var/lib/kubelet/pods/81869f8b-1f2a-481f-ba1d-8453c0c57555/volumes"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.108664 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7dw86"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.223187 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pb6s2"]
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.223473 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" podUID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" containerName="dnsmasq-dns" containerID="cri-o://9b138b79880e6f8c2756a1d3f1a97010b96d3cc9ec5fcdef20a808ae5f71f4cd" gracePeriod=10
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.503990 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4hdn6"]
Jan 22 10:55:01 crc kubenswrapper[4975]: E0122 10:55:01.504368 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35572d60-24e6-424c-90a5-3c7566536764" containerName="mariadb-database-create"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504387 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="35572d60-24e6-424c-90a5-3c7566536764" containerName="mariadb-database-create"
Jan 22 10:55:01 crc kubenswrapper[4975]: E0122 10:55:01.504408 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155e5871-28a0-48dc-9bbf-1445a44ad371" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504417 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="155e5871-28a0-48dc-9bbf-1445a44ad371" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: E0122 10:55:01.504428 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f9507c-123d-4f8d-ae32-c8ffefc98594" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504437 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f9507c-123d-4f8d-ae32-c8ffefc98594" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: E0122 10:55:01.504456 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49d3932-3b57-4488-ae1b-3b2f95e6385f" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504464 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49d3932-3b57-4488-ae1b-3b2f95e6385f" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: E0122 10:55:01.504485 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81869f8b-1f2a-481f-ba1d-8453c0c57555" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504493 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="81869f8b-1f2a-481f-ba1d-8453c0c57555" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: E0122 10:55:01.504508 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf" containerName="mariadb-database-create"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504516 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf" containerName="mariadb-database-create"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504698 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="81869f8b-1f2a-481f-ba1d-8453c0c57555" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504713 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49d3932-3b57-4488-ae1b-3b2f95e6385f" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504726 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf" containerName="mariadb-database-create"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504739 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f9507c-123d-4f8d-ae32-c8ffefc98594" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504752 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="155e5871-28a0-48dc-9bbf-1445a44ad371" containerName="mariadb-account-create-update"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.504760 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="35572d60-24e6-424c-90a5-3c7566536764" containerName="mariadb-database-create"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.505353 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.508148 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pqwbk"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.508364 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.511042 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4hdn6"]
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.562388 4975 generic.go:334] "Generic (PLEG): container finished" podID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" containerID="9b138b79880e6f8c2756a1d3f1a97010b96d3cc9ec5fcdef20a808ae5f71f4cd" exitCode=0
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.562428 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" event={"ID":"18481aeb-1cfa-4bd1-af3a-d4daea62b647","Type":"ContainerDied","Data":"9b138b79880e6f8c2756a1d3f1a97010b96d3cc9ec5fcdef20a808ae5f71f4cd"}
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.568367 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdq5\" (UniqueName: \"kubernetes.io/projected/c0f9a87e-8307-44d7-a333-8eb76169882b-kube-api-access-5wdq5\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.568413 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-config-data\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.568481 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-db-sync-config-data\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.568497 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-combined-ca-bundle\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.670189 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-db-sync-config-data\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.670256 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-combined-ca-bundle\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.670441 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdq5\" (UniqueName: \"kubernetes.io/projected/c0f9a87e-8307-44d7-a333-8eb76169882b-kube-api-access-5wdq5\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.670508 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-config-data\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.676557 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-db-sync-config-data\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.676774 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-combined-ca-bundle\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.678049 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-config-data\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.689549 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdq5\" (UniqueName: \"kubernetes.io/projected/c0f9a87e-8307-44d7-a333-8eb76169882b-kube-api-access-5wdq5\") pod \"glance-db-sync-4hdn6\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.836463 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4hdn6"
Jan 22 10:55:01 crc kubenswrapper[4975]: I0122 10:55:01.852374 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" podUID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.96:5353: connect: connection refused"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.159175 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.280155 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4dw2\" (UniqueName: \"kubernetes.io/projected/18481aeb-1cfa-4bd1-af3a-d4daea62b647-kube-api-access-r4dw2\") pod \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") "
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.280665 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-dns-svc\") pod \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") "
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.280765 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-config\") pod \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\" (UID: \"18481aeb-1cfa-4bd1-af3a-d4daea62b647\") "
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.284954 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18481aeb-1cfa-4bd1-af3a-d4daea62b647-kube-api-access-r4dw2" (OuterVolumeSpecName: "kube-api-access-r4dw2") pod "18481aeb-1cfa-4bd1-af3a-d4daea62b647" (UID: "18481aeb-1cfa-4bd1-af3a-d4daea62b647"). InnerVolumeSpecName "kube-api-access-r4dw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.314373 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18481aeb-1cfa-4bd1-af3a-d4daea62b647" (UID: "18481aeb-1cfa-4bd1-af3a-d4daea62b647"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.357163 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-config" (OuterVolumeSpecName: "config") pod "18481aeb-1cfa-4bd1-af3a-d4daea62b647" (UID: "18481aeb-1cfa-4bd1-af3a-d4daea62b647"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.383421 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-config\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.383449 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4dw2\" (UniqueName: \"kubernetes.io/projected/18481aeb-1cfa-4bd1-af3a-d4daea62b647-kube-api-access-r4dw2\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.383463 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18481aeb-1cfa-4bd1-af3a-d4daea62b647-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.452943 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4hdn6"]
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.571451 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2" event={"ID":"18481aeb-1cfa-4bd1-af3a-d4daea62b647","Type":"ContainerDied","Data":"b76d0ddbc107002f44fcdf6f74bb6ecdd838bc93649367e42f3353a3a53e2e9e"}
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.571766 4975 scope.go:117] "RemoveContainer" containerID="9b138b79880e6f8c2756a1d3f1a97010b96d3cc9ec5fcdef20a808ae5f71f4cd"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.571512 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pb6s2"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.573507 4975 generic.go:334] "Generic (PLEG): container finished" podID="ee721893-cf9b-4712-b421-19d6a59d7cf7" containerID="ea0469a6260fe2f7e46a0cc59f2d0b726e1a5a902adc250a76ba30df3a31ea57" exitCode=0
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.573579 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kqbs" event={"ID":"ee721893-cf9b-4712-b421-19d6a59d7cf7","Type":"ContainerDied","Data":"ea0469a6260fe2f7e46a0cc59f2d0b726e1a5a902adc250a76ba30df3a31ea57"}
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.575400 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4hdn6" event={"ID":"c0f9a87e-8307-44d7-a333-8eb76169882b","Type":"ContainerStarted","Data":"d306e617d2647c54fc60d2efee4866bfaea5482f81604ba29f477c193c9010df"}
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.605523 4975 scope.go:117] "RemoveContainer" containerID="b3bcf9b93932253af09efd34c2a2ee850039b12607c02033f8d9e1052582282d"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.628486 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pb6s2"]
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.634373 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pb6s2"]
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.884113 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qq9t6"]
Jan 22 10:55:02 crc kubenswrapper[4975]: E0122 10:55:02.884772 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" containerName="dnsmasq-dns"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.884816 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" containerName="dnsmasq-dns"
Jan 22 10:55:02 crc kubenswrapper[4975]: E0122 10:55:02.884860 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" containerName="init"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.884877 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" containerName="init"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.885268 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" containerName="dnsmasq-dns"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.886450 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.888950 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.892462 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qq9t6"]
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.994632 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qx5\" (UniqueName: \"kubernetes.io/projected/7b002772-0877-4d68-9b44-66c23b9453e9-kube-api-access-j2qx5\") pod \"root-account-create-update-qq9t6\" (UID: \"7b002772-0877-4d68-9b44-66c23b9453e9\") " pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:02 crc kubenswrapper[4975]: I0122 10:55:02.994775 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b002772-0877-4d68-9b44-66c23b9453e9-operator-scripts\") pod \"root-account-create-update-qq9t6\" (UID: \"7b002772-0877-4d68-9b44-66c23b9453e9\") " pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:03 crc kubenswrapper[4975]: I0122 10:55:03.096226 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b002772-0877-4d68-9b44-66c23b9453e9-operator-scripts\") pod \"root-account-create-update-qq9t6\" (UID: \"7b002772-0877-4d68-9b44-66c23b9453e9\") " pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:03 crc kubenswrapper[4975]: I0122 10:55:03.096529 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2qx5\" (UniqueName: \"kubernetes.io/projected/7b002772-0877-4d68-9b44-66c23b9453e9-kube-api-access-j2qx5\") pod \"root-account-create-update-qq9t6\" (UID: \"7b002772-0877-4d68-9b44-66c23b9453e9\") " pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:03 crc kubenswrapper[4975]: I0122 10:55:03.097277 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b002772-0877-4d68-9b44-66c23b9453e9-operator-scripts\") pod \"root-account-create-update-qq9t6\" (UID: \"7b002772-0877-4d68-9b44-66c23b9453e9\") " pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:03 crc kubenswrapper[4975]: I0122 10:55:03.132850 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2qx5\" (UniqueName: \"kubernetes.io/projected/7b002772-0877-4d68-9b44-66c23b9453e9-kube-api-access-j2qx5\") pod \"root-account-create-update-qq9t6\" (UID: \"7b002772-0877-4d68-9b44-66c23b9453e9\") " pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:03 crc kubenswrapper[4975]: I0122 10:55:03.220976 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:03 crc kubenswrapper[4975]: I0122 10:55:03.467969 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qq9t6"]
Jan 22 10:55:03 crc kubenswrapper[4975]: W0122 10:55:03.479397 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b002772_0877_4d68_9b44_66c23b9453e9.slice/crio-5d5b9def53a54f73b637d7f42b068ca00b2d1cc2079817fda5440a594d92ecf2 WatchSource:0}: Error finding container 5d5b9def53a54f73b637d7f42b068ca00b2d1cc2079817fda5440a594d92ecf2: Status 404 returned error can't find the container with id 5d5b9def53a54f73b637d7f42b068ca00b2d1cc2079817fda5440a594d92ecf2
Jan 22 10:55:03 crc kubenswrapper[4975]: I0122 10:55:03.587041 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qq9t6" event={"ID":"7b002772-0877-4d68-9b44-66c23b9453e9","Type":"ContainerStarted","Data":"5d5b9def53a54f73b637d7f42b068ca00b2d1cc2079817fda5440a594d92ecf2"}
Jan 22 10:55:03 crc kubenswrapper[4975]: I0122 10:55:03.769799 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18481aeb-1cfa-4bd1-af3a-d4daea62b647" path="/var/lib/kubelet/pods/18481aeb-1cfa-4bd1-af3a-d4daea62b647/volumes"
Jan 22 10:55:03 crc kubenswrapper[4975]: I0122 10:55:03.882994 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7kqbs"
Jan 22 10:55:03 crc kubenswrapper[4975]: E0122 10:55:03.930687 4975 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b002772_0877_4d68_9b44_66c23b9453e9.slice/crio-conmon-b647f21bef594ae123a6823ccf63a27a25ce7935528764215977800760a8b8d7.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.020503 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-swiftconf\") pod \"ee721893-cf9b-4712-b421-19d6a59d7cf7\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") "
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.020592 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-combined-ca-bundle\") pod \"ee721893-cf9b-4712-b421-19d6a59d7cf7\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") "
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.020784 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ee721893-cf9b-4712-b421-19d6a59d7cf7-etc-swift\") pod \"ee721893-cf9b-4712-b421-19d6a59d7cf7\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") "
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.020828 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-scripts\") pod \"ee721893-cf9b-4712-b421-19d6a59d7cf7\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") "
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.020920 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zjfl\" (UniqueName: \"kubernetes.io/projected/ee721893-cf9b-4712-b421-19d6a59d7cf7-kube-api-access-2zjfl\") pod \"ee721893-cf9b-4712-b421-19d6a59d7cf7\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") "
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.020968 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-dispersionconf\") pod \"ee721893-cf9b-4712-b421-19d6a59d7cf7\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") "
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.021017 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-ring-data-devices\") pod \"ee721893-cf9b-4712-b421-19d6a59d7cf7\" (UID: \"ee721893-cf9b-4712-b421-19d6a59d7cf7\") "
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.021813 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee721893-cf9b-4712-b421-19d6a59d7cf7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ee721893-cf9b-4712-b421-19d6a59d7cf7" (UID: "ee721893-cf9b-4712-b421-19d6a59d7cf7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.022570 4975 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ee721893-cf9b-4712-b421-19d6a59d7cf7-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.023377 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ee721893-cf9b-4712-b421-19d6a59d7cf7" (UID: "ee721893-cf9b-4712-b421-19d6a59d7cf7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.026645 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee721893-cf9b-4712-b421-19d6a59d7cf7-kube-api-access-2zjfl" (OuterVolumeSpecName: "kube-api-access-2zjfl") pod "ee721893-cf9b-4712-b421-19d6a59d7cf7" (UID: "ee721893-cf9b-4712-b421-19d6a59d7cf7"). InnerVolumeSpecName "kube-api-access-2zjfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.029767 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ee721893-cf9b-4712-b421-19d6a59d7cf7" (UID: "ee721893-cf9b-4712-b421-19d6a59d7cf7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.040816 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee721893-cf9b-4712-b421-19d6a59d7cf7" (UID: "ee721893-cf9b-4712-b421-19d6a59d7cf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.042512 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ee721893-cf9b-4712-b421-19d6a59d7cf7" (UID: "ee721893-cf9b-4712-b421-19d6a59d7cf7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.054892 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-scripts" (OuterVolumeSpecName: "scripts") pod "ee721893-cf9b-4712-b421-19d6a59d7cf7" (UID: "ee721893-cf9b-4712-b421-19d6a59d7cf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.124118 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zjfl\" (UniqueName: \"kubernetes.io/projected/ee721893-cf9b-4712-b421-19d6a59d7cf7-kube-api-access-2zjfl\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.124352 4975 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.124364 4975 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.124373 4975 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.124382 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee721893-cf9b-4712-b421-19d6a59d7cf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.124390 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee721893-cf9b-4712-b421-19d6a59d7cf7-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.597514 4975 generic.go:334] "Generic (PLEG): container finished" podID="7b002772-0877-4d68-9b44-66c23b9453e9" containerID="b647f21bef594ae123a6823ccf63a27a25ce7935528764215977800760a8b8d7" exitCode=0
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.597595 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qq9t6" event={"ID":"7b002772-0877-4d68-9b44-66c23b9453e9","Type":"ContainerDied","Data":"b647f21bef594ae123a6823ccf63a27a25ce7935528764215977800760a8b8d7"}
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.600124 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kqbs" event={"ID":"ee721893-cf9b-4712-b421-19d6a59d7cf7","Type":"ContainerDied","Data":"07d0cca3e77f9dda62785ecdc089edb29d57be2e222716757a1f4911edbaab10"}
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.600149 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07d0cca3e77f9dda62785ecdc089edb29d57be2e222716757a1f4911edbaab10"
Jan 22 10:55:04 crc kubenswrapper[4975]: I0122 10:55:04.600192 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7kqbs"
Jan 22 10:55:05 crc kubenswrapper[4975]: I0122 10:55:05.244174 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0"
Jan 22 10:55:05 crc kubenswrapper[4975]: I0122 10:55:05.251256 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/258efd08-8841-42a5-bfff-035926fa4a8a-etc-swift\") pod \"swift-storage-0\" (UID: \"258efd08-8841-42a5-bfff-035926fa4a8a\") " pod="openstack/swift-storage-0"
Jan 22 10:55:05 crc kubenswrapper[4975]: I0122 10:55:05.482594 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 22 10:55:05 crc kubenswrapper[4975]: I0122 10:55:05.999221 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.055888 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.062868 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b002772-0877-4d68-9b44-66c23b9453e9-operator-scripts\") pod \"7b002772-0877-4d68-9b44-66c23b9453e9\" (UID: \"7b002772-0877-4d68-9b44-66c23b9453e9\") "
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.062946 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2qx5\" (UniqueName: \"kubernetes.io/projected/7b002772-0877-4d68-9b44-66c23b9453e9-kube-api-access-j2qx5\") pod \"7b002772-0877-4d68-9b44-66c23b9453e9\" (UID: \"7b002772-0877-4d68-9b44-66c23b9453e9\") "
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.063782 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b002772-0877-4d68-9b44-66c23b9453e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b002772-0877-4d68-9b44-66c23b9453e9" (UID: "7b002772-0877-4d68-9b44-66c23b9453e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.068110 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b002772-0877-4d68-9b44-66c23b9453e9-kube-api-access-j2qx5" (OuterVolumeSpecName: "kube-api-access-j2qx5") pod "7b002772-0877-4d68-9b44-66c23b9453e9" (UID: "7b002772-0877-4d68-9b44-66c23b9453e9"). InnerVolumeSpecName "kube-api-access-j2qx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.164597 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b002772-0877-4d68-9b44-66c23b9453e9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.164831 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2qx5\" (UniqueName: \"kubernetes.io/projected/7b002772-0877-4d68-9b44-66c23b9453e9-kube-api-access-j2qx5\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.274516 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.588034 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mj5f2" podUID="47de946e-512c-49a6-b633-c3c1640fb4bb" containerName="ovn-controller" probeResult="failure" output=<
Jan 22 10:55:06 crc kubenswrapper[4975]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 22 10:55:06 crc kubenswrapper[4975]: >
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.635827 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"44ecc867871a37ac1d32575aff6ed171882d2567a9d81774ced3894824eeb79b"}
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.638225 4975 generic.go:334] "Generic (PLEG): container finished" podID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerID="5602f3dd0b6da7296455d135a004b394e4a988770a31398c9a509358b5a9efde" exitCode=0
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.638312 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9","Type":"ContainerDied","Data":"5602f3dd0b6da7296455d135a004b394e4a988770a31398c9a509358b5a9efde"}
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.645103 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qq9t6" event={"ID":"7b002772-0877-4d68-9b44-66c23b9453e9","Type":"ContainerDied","Data":"5d5b9def53a54f73b637d7f42b068ca00b2d1cc2079817fda5440a594d92ecf2"}
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.645154 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d5b9def53a54f73b637d7f42b068ca00b2d1cc2079817fda5440a594d92ecf2"
Jan 22 10:55:06 crc kubenswrapper[4975]: I0122 10:55:06.645226 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qq9t6"
Jan 22 10:55:07 crc kubenswrapper[4975]: I0122 10:55:07.652783 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9","Type":"ContainerStarted","Data":"ecd6933674a3deda52cfee43ffd08f5bd0aabd7a94ccdfb387eb104a02fc2dba"}
Jan 22 10:55:07 crc kubenswrapper[4975]: I0122 10:55:07.654488 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"872c5b6d0e25d8e4a12eda9062315dd0b84c3f5c0165c82d0a72cc723eafaa3b"}
Jan 22 10:55:07 crc kubenswrapper[4975]: I0122 10:55:07.654521 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"f2406f2cb395d79fe73b228cb86657577abacbbde54431256cea0cd7c4e87914"}
Jan 22 10:55:07 crc kubenswrapper[4975]: I0122 10:55:07.781633 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.543053317 podStartE2EDuration="56.781614188s" podCreationTimestamp="2026-01-22 10:54:11 +0000 UTC" firstStartedPulling="2026-01-22 10:54:21.899079961 +0000 UTC m=+1054.557116091" lastFinishedPulling="2026-01-22 10:54:32.137640852 +0000 UTC m=+1064.795676962" observedRunningTime="2026-01-22 10:55:07.679683205 +0000 UTC m=+1100.337719325" watchObservedRunningTime="2026-01-22 10:55:07.781614188 +0000 UTC m=+1100.439650298"
Jan 22 10:55:08 crc kubenswrapper[4975]: I0122 10:55:08.666373 4975 generic.go:334] "Generic (PLEG): container finished" podID="be23c045-c637-4832-8944-ed6250b0d8b3" containerID="b572079d5df8d0dd6fc8e50919ce231fd9afd70997e377e23ee67f64c9525350" exitCode=0
Jan 22 10:55:08 crc kubenswrapper[4975]: I0122 10:55:08.666438 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"be23c045-c637-4832-8944-ed6250b0d8b3","Type":"ContainerDied","Data":"b572079d5df8d0dd6fc8e50919ce231fd9afd70997e377e23ee67f64c9525350"}
Jan 22 10:55:09 crc kubenswrapper[4975]: I0122 10:55:09.198154 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qq9t6"]
Jan 22 10:55:09 crc kubenswrapper[4975]: I0122 10:55:09.204474 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qq9t6"]
Jan 22 10:55:09 crc kubenswrapper[4975]: I0122 10:55:09.767743 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b002772-0877-4d68-9b44-66c23b9453e9" path="/var/lib/kubelet/pods/7b002772-0877-4d68-9b44-66c23b9453e9/volumes"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.584237 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mj5f2" podUID="47de946e-512c-49a6-b633-c3c1640fb4bb" containerName="ovn-controller" probeResult="failure" output=<
Jan 22 10:55:11 crc kubenswrapper[4975]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 22 10:55:11 crc kubenswrapper[4975]: >
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.637524 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jqvh9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.648789 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jqvh9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.860164 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mj5f2-config-fbbm9"]
Jan 22 10:55:11 crc kubenswrapper[4975]: E0122 10:55:11.861279 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b002772-0877-4d68-9b44-66c23b9453e9" containerName="mariadb-account-create-update"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.861296 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b002772-0877-4d68-9b44-66c23b9453e9" containerName="mariadb-account-create-update"
Jan 22 10:55:11 crc kubenswrapper[4975]: E0122 10:55:11.861308 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee721893-cf9b-4712-b421-19d6a59d7cf7" containerName="swift-ring-rebalance"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.861315 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee721893-cf9b-4712-b421-19d6a59d7cf7" containerName="swift-ring-rebalance"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.861491 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b002772-0877-4d68-9b44-66c23b9453e9" containerName="mariadb-account-create-update"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.861506 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee721893-cf9b-4712-b421-19d6a59d7cf7" containerName="swift-ring-rebalance"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.861985 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.864803 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.867421 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-scripts\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.867477 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run-ovn\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.867501 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-additional-scripts\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.867558 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-log-ovn\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.867710 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.867867 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4drpr\" (UniqueName: \"kubernetes.io/projected/a351090f-f5d7-4d4d-8339-94de169c4b9d-kube-api-access-4drpr\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.873554 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mj5f2-config-fbbm9"]
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.969196 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-scripts\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.969285 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run-ovn\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.969318 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-additional-scripts\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.969443 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-log-ovn\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.969474 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.969536 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4drpr\" (UniqueName: \"kubernetes.io/projected/a351090f-f5d7-4d4d-8339-94de169c4b9d-kube-api-access-4drpr\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.969681 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-log-ovn\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.969688 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run-ovn\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.969744 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.970393 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-additional-scripts\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.971139 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-scripts\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:11 crc kubenswrapper[4975]: I0122 10:55:11.989160 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4drpr\" (UniqueName: \"kubernetes.io/projected/a351090f-f5d7-4d4d-8339-94de169c4b9d-kube-api-access-4drpr\") pod \"ovn-controller-mj5f2-config-fbbm9\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:12 crc kubenswrapper[4975]: I0122 10:55:12.226060 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:13 crc kubenswrapper[4975]: I0122 10:55:13.127140 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.249072 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jdx5j"]
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.257268 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jdx5j"]
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.257572 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jdx5j"
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.265713 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.410213 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7feaf89-c705-432e-bbd4-a5ee64d7c971-operator-scripts\") pod \"root-account-create-update-jdx5j\" (UID: \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\") " pod="openstack/root-account-create-update-jdx5j"
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.410388 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/f7feaf89-c705-432e-bbd4-a5ee64d7c971-kube-api-access-dc7d9\") pod \"root-account-create-update-jdx5j\" (UID: \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\") " pod="openstack/root-account-create-update-jdx5j"
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.511777 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7feaf89-c705-432e-bbd4-a5ee64d7c971-operator-scripts\") pod \"root-account-create-update-jdx5j\" (UID: \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\") " pod="openstack/root-account-create-update-jdx5j"
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.511892 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/f7feaf89-c705-432e-bbd4-a5ee64d7c971-kube-api-access-dc7d9\") pod \"root-account-create-update-jdx5j\" (UID: \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\") " pod="openstack/root-account-create-update-jdx5j"
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.512895 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7feaf89-c705-432e-bbd4-a5ee64d7c971-operator-scripts\") pod \"root-account-create-update-jdx5j\" (UID: \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\") " pod="openstack/root-account-create-update-jdx5j"
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.530356 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/f7feaf89-c705-432e-bbd4-a5ee64d7c971-kube-api-access-dc7d9\") pod \"root-account-create-update-jdx5j\" (UID: \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\") " pod="openstack/root-account-create-update-jdx5j"
Jan 22 10:55:14 crc kubenswrapper[4975]: I0122 10:55:14.581467 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jdx5j"
Jan 22 10:55:16 crc kubenswrapper[4975]: W0122 10:55:16.005600 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7feaf89_c705_432e_bbd4_a5ee64d7c971.slice/crio-2f6c276e66024cc1fb443d107c0ca2836b0253ffb97d15999a14b3aca3052b0a WatchSource:0}: Error finding container 2f6c276e66024cc1fb443d107c0ca2836b0253ffb97d15999a14b3aca3052b0a: Status 404 returned error can't find the container with id 2f6c276e66024cc1fb443d107c0ca2836b0253ffb97d15999a14b3aca3052b0a
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.015322 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jdx5j"]
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.073559 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mj5f2-config-fbbm9"]
Jan 22 10:55:16 crc kubenswrapper[4975]: W0122 10:55:16.080684 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda351090f_f5d7_4d4d_8339_94de169c4b9d.slice/crio-ce4d8776e2253b49eb31714577700f95fc8cc303257520cc5f0724f43f6f777b WatchSource:0}: Error finding container ce4d8776e2253b49eb31714577700f95fc8cc303257520cc5f0724f43f6f777b: Status 404 returned error can't find the container with id ce4d8776e2253b49eb31714577700f95fc8cc303257520cc5f0724f43f6f777b
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.591234 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mj5f2"
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.739783 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"be23c045-c637-4832-8944-ed6250b0d8b3","Type":"ContainerStarted","Data":"744307bfc9991ae8786e9736a8a895cc0d7376574525d240d42e905082fe5a8c"}
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.740072 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.743871 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4hdn6" event={"ID":"c0f9a87e-8307-44d7-a333-8eb76169882b","Type":"ContainerStarted","Data":"c9c7dd51b729be2c9be43e6649a6b76a8b411cfc2c3af80cdc9174f59372196e"}
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.747389 4975 generic.go:334] "Generic (PLEG): container finished" podID="f7feaf89-c705-432e-bbd4-a5ee64d7c971" containerID="a6938f1c38f78389ce0b30b55067b88b256f1eaceec9b54bf40ad3e84a2175f0" exitCode=0
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.747481 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jdx5j" event={"ID":"f7feaf89-c705-432e-bbd4-a5ee64d7c971","Type":"ContainerDied","Data":"a6938f1c38f78389ce0b30b55067b88b256f1eaceec9b54bf40ad3e84a2175f0"}
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.747519 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jdx5j" event={"ID":"f7feaf89-c705-432e-bbd4-a5ee64d7c971","Type":"ContainerStarted","Data":"2f6c276e66024cc1fb443d107c0ca2836b0253ffb97d15999a14b3aca3052b0a"}
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.755526 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"b654b67206f2d8499ec9fbdeabbd7a9c58a047edabfa22c3f6d772cb9cf17055"}
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.755592 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"f300bea7b434e8b63d0255cae3066f09df86680653942b8d631890258fabf489"}
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.759133 4975 generic.go:334] "Generic (PLEG): container finished" podID="a351090f-f5d7-4d4d-8339-94de169c4b9d" containerID="7f6c483d627120d86189c836db318cf572a7bb52c27ffba7e59b0671a320cfb0" exitCode=0
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.759193 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mj5f2-config-fbbm9" event={"ID":"a351090f-f5d7-4d4d-8339-94de169c4b9d","Type":"ContainerDied","Data":"7f6c483d627120d86189c836db318cf572a7bb52c27ffba7e59b0671a320cfb0"}
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.759226 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mj5f2-config-fbbm9" event={"ID":"a351090f-f5d7-4d4d-8339-94de169c4b9d","Type":"ContainerStarted","Data":"ce4d8776e2253b49eb31714577700f95fc8cc303257520cc5f0724f43f6f777b"}
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.769453 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=61.768310884 podStartE2EDuration="1m5.769433948s" podCreationTimestamp="2026-01-22 10:54:11 +0000 UTC" firstStartedPulling="2026-01-22 10:54:28.13881496 +0000 UTC m=+1060.796851070" lastFinishedPulling="2026-01-22 10:54:32.139938024 +0000 UTC m=+1064.797974134" observedRunningTime="2026-01-22 10:55:16.768260169 +0000 UTC m=+1109.426296269" watchObservedRunningTime="2026-01-22 10:55:16.769433948 +0000 UTC m=+1109.427470058"
Jan 22 10:55:16 crc kubenswrapper[4975]: I0122 10:55:16.799440 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4hdn6" podStartSLOduration=2.621274034 podStartE2EDuration="15.799423965s" podCreationTimestamp="2026-01-22 10:55:01 +0000 UTC" firstStartedPulling="2026-01-22 10:55:02.453559346 +0000 UTC m=+1095.111595456" lastFinishedPulling="2026-01-22 10:55:15.631709277 +0000 UTC m=+1108.289745387" observedRunningTime="2026-01-22 10:55:16.793353738 +0000 UTC m=+1109.451389848" watchObservedRunningTime="2026-01-22 10:55:16.799423965 +0000 UTC m=+1109.457460075"
Jan 22 10:55:17 crc kubenswrapper[4975]: I0122 10:55:17.788435 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"111e6850b0b01e0dd81f29459ce3bc37760b42649a1526ed66618765d09c7016"}
Jan 22 10:55:17 crc kubenswrapper[4975]: I0122 10:55:17.790076 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"5768eeda0f9eaa5f1ce6e30248676eac7a77dc6647387768fee9e4186ab2a492"}
Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.124750 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mj5f2-config-fbbm9"
Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.129386 4975 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-jdx5j" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.273957 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run\") pod \"a351090f-f5d7-4d4d-8339-94de169c4b9d\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.274006 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4drpr\" (UniqueName: \"kubernetes.io/projected/a351090f-f5d7-4d4d-8339-94de169c4b9d-kube-api-access-4drpr\") pod \"a351090f-f5d7-4d4d-8339-94de169c4b9d\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.274047 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-scripts\") pod \"a351090f-f5d7-4d4d-8339-94de169c4b9d\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.274072 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7feaf89-c705-432e-bbd4-a5ee64d7c971-operator-scripts\") pod \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\" (UID: \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\") " Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.274092 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-log-ovn\") pod \"a351090f-f5d7-4d4d-8339-94de169c4b9d\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.274108 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run-ovn\") pod \"a351090f-f5d7-4d4d-8339-94de169c4b9d\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.274125 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-additional-scripts\") pod \"a351090f-f5d7-4d4d-8339-94de169c4b9d\" (UID: \"a351090f-f5d7-4d4d-8339-94de169c4b9d\") " Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.274185 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/f7feaf89-c705-432e-bbd4-a5ee64d7c971-kube-api-access-dc7d9\") pod \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\" (UID: \"f7feaf89-c705-432e-bbd4-a5ee64d7c971\") " Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.275027 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run" (OuterVolumeSpecName: "var-run") pod "a351090f-f5d7-4d4d-8339-94de169c4b9d" (UID: "a351090f-f5d7-4d4d-8339-94de169c4b9d"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.275354 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a351090f-f5d7-4d4d-8339-94de169c4b9d" (UID: "a351090f-f5d7-4d4d-8339-94de169c4b9d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.275439 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a351090f-f5d7-4d4d-8339-94de169c4b9d" (UID: "a351090f-f5d7-4d4d-8339-94de169c4b9d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.275650 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7feaf89-c705-432e-bbd4-a5ee64d7c971-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7feaf89-c705-432e-bbd4-a5ee64d7c971" (UID: "f7feaf89-c705-432e-bbd4-a5ee64d7c971"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.276122 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a351090f-f5d7-4d4d-8339-94de169c4b9d" (UID: "a351090f-f5d7-4d4d-8339-94de169c4b9d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.276519 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-scripts" (OuterVolumeSpecName: "scripts") pod "a351090f-f5d7-4d4d-8339-94de169c4b9d" (UID: "a351090f-f5d7-4d4d-8339-94de169c4b9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.281797 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a351090f-f5d7-4d4d-8339-94de169c4b9d-kube-api-access-4drpr" (OuterVolumeSpecName: "kube-api-access-4drpr") pod "a351090f-f5d7-4d4d-8339-94de169c4b9d" (UID: "a351090f-f5d7-4d4d-8339-94de169c4b9d"). InnerVolumeSpecName "kube-api-access-4drpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.282184 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7feaf89-c705-432e-bbd4-a5ee64d7c971-kube-api-access-dc7d9" (OuterVolumeSpecName: "kube-api-access-dc7d9") pod "f7feaf89-c705-432e-bbd4-a5ee64d7c971" (UID: "f7feaf89-c705-432e-bbd4-a5ee64d7c971"). InnerVolumeSpecName "kube-api-access-dc7d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.376567 4975 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.376614 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4drpr\" (UniqueName: \"kubernetes.io/projected/a351090f-f5d7-4d4d-8339-94de169c4b9d-kube-api-access-4drpr\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.376629 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.376641 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7feaf89-c705-432e-bbd4-a5ee64d7c971-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.376655 4975 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.376667 4975 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a351090f-f5d7-4d4d-8339-94de169c4b9d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.376679 4975 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a351090f-f5d7-4d4d-8339-94de169c4b9d-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.376690 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/f7feaf89-c705-432e-bbd4-a5ee64d7c971-kube-api-access-dc7d9\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.795714 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mj5f2-config-fbbm9" event={"ID":"a351090f-f5d7-4d4d-8339-94de169c4b9d","Type":"ContainerDied","Data":"ce4d8776e2253b49eb31714577700f95fc8cc303257520cc5f0724f43f6f777b"} Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.795778 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce4d8776e2253b49eb31714577700f95fc8cc303257520cc5f0724f43f6f777b" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.795745 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mj5f2-config-fbbm9" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.797715 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jdx5j" event={"ID":"f7feaf89-c705-432e-bbd4-a5ee64d7c971","Type":"ContainerDied","Data":"2f6c276e66024cc1fb443d107c0ca2836b0253ffb97d15999a14b3aca3052b0a"} Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.797760 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f6c276e66024cc1fb443d107c0ca2836b0253ffb97d15999a14b3aca3052b0a" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.797773 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jdx5j" Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.800788 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"c545937d7185f4be490ff9887721cfb20295581160f1b4a7863876c794c54bcc"} Jan 22 10:55:18 crc kubenswrapper[4975]: I0122 10:55:18.800833 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"5c766610f97f5604108e03bbd1a92bff598351c6f152df610d178e56b8bed821"} Jan 22 10:55:19 crc kubenswrapper[4975]: I0122 10:55:19.231843 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mj5f2-config-fbbm9"] Jan 22 10:55:19 crc kubenswrapper[4975]: I0122 10:55:19.237746 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mj5f2-config-fbbm9"] Jan 22 10:55:19 crc kubenswrapper[4975]: I0122 10:55:19.781292 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a351090f-f5d7-4d4d-8339-94de169c4b9d" path="/var/lib/kubelet/pods/a351090f-f5d7-4d4d-8339-94de169c4b9d/volumes" Jan 22 10:55:19 crc kubenswrapper[4975]: I0122 10:55:19.816457 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"1b031c61652915886864af268491dedb973d96bb982fd9319e85aeef95b2657f"} Jan 22 10:55:19 crc kubenswrapper[4975]: I0122 10:55:19.816491 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"fcc9e01a6636675241bc0c0144260e3199a6ba2179f57696389661afd8bf5acf"} Jan 22 10:55:19 crc kubenswrapper[4975]: I0122 10:55:19.816501 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"95f51646afc4a6c109c75807768e3ed12d75bb15472c47aa0f6ff251b43e3807"} Jan 22 10:55:19 crc kubenswrapper[4975]: I0122 10:55:19.816508 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"be9bb158262350cee86d33b7e9ab05760dcc97f25b1db08e55cb6bf7527a39ba"} Jan 22 10:55:20 crc kubenswrapper[4975]: I0122 10:55:20.828879 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"5231c02cfab41af5632266367ce58ab06eaafea75120e3cebd42a7782940e80a"} Jan 22 10:55:20 crc kubenswrapper[4975]: I0122 
10:55:20.829510 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"b141c5c53078e2af1d23545e38ff73f957bd0f41711b564689d9c56f3e9ca56d"} Jan 22 10:55:20 crc kubenswrapper[4975]: I0122 10:55:20.829528 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"258efd08-8841-42a5-bfff-035926fa4a8a","Type":"ContainerStarted","Data":"8dc1cde6792499dc1a7864ae1927a4750073fe514b8a938841681e641cdfdcce"} Jan 22 10:55:20 crc kubenswrapper[4975]: I0122 10:55:20.880295 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.920713993 podStartE2EDuration="32.880272085s" podCreationTimestamp="2026-01-22 10:54:48 +0000 UTC" firstStartedPulling="2026-01-22 10:55:06.057371831 +0000 UTC m=+1098.715407941" lastFinishedPulling="2026-01-22 10:55:19.016929923 +0000 UTC m=+1111.674966033" observedRunningTime="2026-01-22 10:55:20.86853642 +0000 UTC m=+1113.526572570" watchObservedRunningTime="2026-01-22 10:55:20.880272085 +0000 UTC m=+1113.538308205" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.214786 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-pnnfb"] Jan 22 10:55:21 crc kubenswrapper[4975]: E0122 10:55:21.215283 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7feaf89-c705-432e-bbd4-a5ee64d7c971" containerName="mariadb-account-create-update" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.215296 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7feaf89-c705-432e-bbd4-a5ee64d7c971" containerName="mariadb-account-create-update" Jan 22 10:55:21 crc kubenswrapper[4975]: E0122 10:55:21.215308 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a351090f-f5d7-4d4d-8339-94de169c4b9d" containerName="ovn-config" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.215314 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a351090f-f5d7-4d4d-8339-94de169c4b9d" containerName="ovn-config" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.215464 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7feaf89-c705-432e-bbd4-a5ee64d7c971" containerName="mariadb-account-create-update" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.215481 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a351090f-f5d7-4d4d-8339-94de169c4b9d" containerName="ovn-config" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.216195 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.219297 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.231286 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-pnnfb"] Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.321723 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.322036 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.322200 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.322453 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrv4p\" (UniqueName: \"kubernetes.io/projected/bcb366d2-e585-41b2-aa75-5835b1576635-kube-api-access-hrv4p\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.322565 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-svc\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.322678 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-config\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.424112 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-config\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.424291 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: 
\"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.424374 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.424459 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.424712 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrv4p\" (UniqueName: \"kubernetes.io/projected/bcb366d2-e585-41b2-aa75-5835b1576635-kube-api-access-hrv4p\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.424783 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-svc\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.426175 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.426799 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-svc\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.428045 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-config\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.428074 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.428549 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 
10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.449835 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrv4p\" (UniqueName: \"kubernetes.io/projected/bcb366d2-e585-41b2-aa75-5835b1576635-kube-api-access-hrv4p\") pod \"dnsmasq-dns-764c5664d7-pnnfb\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:21 crc kubenswrapper[4975]: I0122 10:55:21.533068 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:22 crc kubenswrapper[4975]: I0122 10:55:22.038387 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-pnnfb"] Jan 22 10:55:22 crc kubenswrapper[4975]: W0122 10:55:22.043378 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb366d2_e585_41b2_aa75_5835b1576635.slice/crio-70c572940440c0b05fbed29997ccc036150c032137aefeb17bd4fcecea76a1b1 WatchSource:0}: Error finding container 70c572940440c0b05fbed29997ccc036150c032137aefeb17bd4fcecea76a1b1: Status 404 returned error can't find the container with id 70c572940440c0b05fbed29997ccc036150c032137aefeb17bd4fcecea76a1b1 Jan 22 10:55:22 crc kubenswrapper[4975]: I0122 10:55:22.845618 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" event={"ID":"bcb366d2-e585-41b2-aa75-5835b1576635","Type":"ContainerStarted","Data":"70c572940440c0b05fbed29997ccc036150c032137aefeb17bd4fcecea76a1b1"} Jan 22 10:55:23 crc kubenswrapper[4975]: I0122 10:55:23.129616 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:55:23 crc kubenswrapper[4975]: I0122 10:55:23.854085 4975 generic.go:334] "Generic (PLEG): container finished" podID="bcb366d2-e585-41b2-aa75-5835b1576635" containerID="d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd" exitCode=0 Jan 22 10:55:23 crc kubenswrapper[4975]: I0122 10:55:23.854145 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" event={"ID":"bcb366d2-e585-41b2-aa75-5835b1576635","Type":"ContainerDied","Data":"d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd"} Jan 22 10:55:24 crc kubenswrapper[4975]: I0122 10:55:24.864362 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" event={"ID":"bcb366d2-e585-41b2-aa75-5835b1576635","Type":"ContainerStarted","Data":"b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c"} Jan 22 10:55:24 crc kubenswrapper[4975]: I0122 10:55:24.864988 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:24 crc kubenswrapper[4975]: I0122 10:55:24.865964 4975 generic.go:334] "Generic (PLEG): container finished" podID="c0f9a87e-8307-44d7-a333-8eb76169882b" containerID="c9c7dd51b729be2c9be43e6649a6b76a8b411cfc2c3af80cdc9174f59372196e" exitCode=0 Jan 22 10:55:24 crc kubenswrapper[4975]: I0122 10:55:24.866003 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4hdn6" event={"ID":"c0f9a87e-8307-44d7-a333-8eb76169882b","Type":"ContainerDied","Data":"c9c7dd51b729be2c9be43e6649a6b76a8b411cfc2c3af80cdc9174f59372196e"} Jan 22 10:55:24 crc kubenswrapper[4975]: I0122 10:55:24.891906 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" podStartSLOduration=3.891884263 podStartE2EDuration="3.891884263s" podCreationTimestamp="2026-01-22 10:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:55:24.880241671 +0000 UTC m=+1117.538277791" watchObservedRunningTime="2026-01-22 10:55:24.891884263 +0000 UTC m=+1117.549920403" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.388986 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4hdn6" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.517904 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-combined-ca-bundle\") pod \"c0f9a87e-8307-44d7-a333-8eb76169882b\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.518057 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-db-sync-config-data\") pod \"c0f9a87e-8307-44d7-a333-8eb76169882b\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.518131 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wdq5\" (UniqueName: \"kubernetes.io/projected/c0f9a87e-8307-44d7-a333-8eb76169882b-kube-api-access-5wdq5\") pod \"c0f9a87e-8307-44d7-a333-8eb76169882b\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.518850 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-config-data\") pod \"c0f9a87e-8307-44d7-a333-8eb76169882b\" (UID: \"c0f9a87e-8307-44d7-a333-8eb76169882b\") " Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.525770 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c0f9a87e-8307-44d7-a333-8eb76169882b" (UID: "c0f9a87e-8307-44d7-a333-8eb76169882b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.530565 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f9a87e-8307-44d7-a333-8eb76169882b-kube-api-access-5wdq5" (OuterVolumeSpecName: "kube-api-access-5wdq5") pod "c0f9a87e-8307-44d7-a333-8eb76169882b" (UID: "c0f9a87e-8307-44d7-a333-8eb76169882b"). InnerVolumeSpecName "kube-api-access-5wdq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.562976 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0f9a87e-8307-44d7-a333-8eb76169882b" (UID: "c0f9a87e-8307-44d7-a333-8eb76169882b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.568730 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-config-data" (OuterVolumeSpecName: "config-data") pod "c0f9a87e-8307-44d7-a333-8eb76169882b" (UID: "c0f9a87e-8307-44d7-a333-8eb76169882b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.621655 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.621691 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.621701 4975 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0f9a87e-8307-44d7-a333-8eb76169882b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.621710 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wdq5\" (UniqueName: \"kubernetes.io/projected/c0f9a87e-8307-44d7-a333-8eb76169882b-kube-api-access-5wdq5\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.884125 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4hdn6" event={"ID":"c0f9a87e-8307-44d7-a333-8eb76169882b","Type":"ContainerDied","Data":"d306e617d2647c54fc60d2efee4866bfaea5482f81604ba29f477c193c9010df"} Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.884169 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d306e617d2647c54fc60d2efee4866bfaea5482f81604ba29f477c193c9010df" Jan 22 10:55:26 crc kubenswrapper[4975]: I0122 10:55:26.884206 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4hdn6" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.343581 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-pnnfb"] Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.344026 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" podUID="bcb366d2-e585-41b2-aa75-5835b1576635" containerName="dnsmasq-dns" containerID="cri-o://b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c" gracePeriod=10 Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.378106 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zl6sk"] Jan 22 10:55:27 crc kubenswrapper[4975]: E0122 10:55:27.386977 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f9a87e-8307-44d7-a333-8eb76169882b" containerName="glance-db-sync" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.387011 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f9a87e-8307-44d7-a333-8eb76169882b" containerName="glance-db-sync" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.387224 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0f9a87e-8307-44d7-a333-8eb76169882b" containerName="glance-db-sync" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.388028 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.398518 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zl6sk"] Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.437794 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.437851 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.438823 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zgp\" (UniqueName: \"kubernetes.io/projected/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-kube-api-access-n4zgp\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.438902 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-config\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.438931 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-sb\") pod 
\"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.439002 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.439022 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.439043 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.540820 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-config\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.540866 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.540908 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.540928 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.540949 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.541002 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4zgp\" (UniqueName: \"kubernetes.io/projected/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-kube-api-access-n4zgp\") pod 
\"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.541773 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.541778 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-config\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.542206 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.542234 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.542426 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.575570 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zgp\" (UniqueName: \"kubernetes.io/projected/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-kube-api-access-n4zgp\") pod \"dnsmasq-dns-74f6bcbc87-zl6sk\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.707208 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.802392 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.893681 4975 generic.go:334] "Generic (PLEG): container finished" podID="bcb366d2-e585-41b2-aa75-5835b1576635" containerID="b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c" exitCode=0 Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.893730 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" event={"ID":"bcb366d2-e585-41b2-aa75-5835b1576635","Type":"ContainerDied","Data":"b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c"} Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.893755 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" event={"ID":"bcb366d2-e585-41b2-aa75-5835b1576635","Type":"ContainerDied","Data":"70c572940440c0b05fbed29997ccc036150c032137aefeb17bd4fcecea76a1b1"} Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.893771 4975 scope.go:117] "RemoveContainer" containerID="b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.893869 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-pnnfb" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.944257 4975 scope.go:117] "RemoveContainer" containerID="d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.956975 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-sb\") pod \"bcb366d2-e585-41b2-aa75-5835b1576635\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.957106 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-svc\") pod \"bcb366d2-e585-41b2-aa75-5835b1576635\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.957190 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-nb\") pod \"bcb366d2-e585-41b2-aa75-5835b1576635\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.957280 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-swift-storage-0\") pod \"bcb366d2-e585-41b2-aa75-5835b1576635\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.957339 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-config\") pod \"bcb366d2-e585-41b2-aa75-5835b1576635\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.957365 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrv4p\" (UniqueName: \"kubernetes.io/projected/bcb366d2-e585-41b2-aa75-5835b1576635-kube-api-access-hrv4p\") pod 
\"bcb366d2-e585-41b2-aa75-5835b1576635\" (UID: \"bcb366d2-e585-41b2-aa75-5835b1576635\") " Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.959602 4975 scope.go:117] "RemoveContainer" containerID="b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c" Jan 22 10:55:27 crc kubenswrapper[4975]: E0122 10:55:27.960661 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c\": container with ID starting with b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c not found: ID does not exist" containerID="b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.960694 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c"} err="failed to get container status \"b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c\": rpc error: code = NotFound desc = could not find container \"b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c\": container with ID starting with b4afe5b40b78042f69b23a5f6c9e8349f463bd7cc8686df718b4e910aab0b68c not found: ID does not exist" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.960716 4975 scope.go:117] "RemoveContainer" containerID="d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd" Jan 22 10:55:27 crc kubenswrapper[4975]: E0122 10:55:27.961314 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd\": container with ID starting with d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd not found: ID does not exist" containerID="d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.961525 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd"} err="failed to get container status \"d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd\": rpc error: code = NotFound desc = could not find container \"d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd\": container with ID starting with d16357d5d9d77135a001e8c14e98e6a5b4e2ff73dc44033927ec9710d805cefd not found: ID does not exist" Jan 22 10:55:27 crc kubenswrapper[4975]: I0122 10:55:27.968096 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb366d2-e585-41b2-aa75-5835b1576635-kube-api-access-hrv4p" (OuterVolumeSpecName: "kube-api-access-hrv4p") pod "bcb366d2-e585-41b2-aa75-5835b1576635" (UID: "bcb366d2-e585-41b2-aa75-5835b1576635"). InnerVolumeSpecName "kube-api-access-hrv4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.025170 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bcb366d2-e585-41b2-aa75-5835b1576635" (UID: "bcb366d2-e585-41b2-aa75-5835b1576635"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.026189 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bcb366d2-e585-41b2-aa75-5835b1576635" (UID: "bcb366d2-e585-41b2-aa75-5835b1576635"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.026535 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-config" (OuterVolumeSpecName: "config") pod "bcb366d2-e585-41b2-aa75-5835b1576635" (UID: "bcb366d2-e585-41b2-aa75-5835b1576635"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.028457 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bcb366d2-e585-41b2-aa75-5835b1576635" (UID: "bcb366d2-e585-41b2-aa75-5835b1576635"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.043507 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bcb366d2-e585-41b2-aa75-5835b1576635" (UID: "bcb366d2-e585-41b2-aa75-5835b1576635"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.059412 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.059446 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.059454 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.059463 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.059473 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb366d2-e585-41b2-aa75-5835b1576635-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.059483 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrv4p\" (UniqueName: \"kubernetes.io/projected/bcb366d2-e585-41b2-aa75-5835b1576635-kube-api-access-hrv4p\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.191554 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-74f6bcbc87-zl6sk"] Jan 22 10:55:28 crc kubenswrapper[4975]: W0122 10:55:28.198666 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23ee3bef_bd02_40d9_8f64_d6cb3f79f9b1.slice/crio-2fd71df3d83069b22058a1c8d4452517802f7e0d52e054d4fbe5ba9f54e59688 WatchSource:0}: Error finding container 2fd71df3d83069b22058a1c8d4452517802f7e0d52e054d4fbe5ba9f54e59688: Status 404 returned error can't find the container with id 2fd71df3d83069b22058a1c8d4452517802f7e0d52e054d4fbe5ba9f54e59688 Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.227758 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-pnnfb"] Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.232789 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-pnnfb"] Jan 22 10:55:28 crc kubenswrapper[4975]: I0122 10:55:28.902315 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" event={"ID":"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1","Type":"ContainerStarted","Data":"2fd71df3d83069b22058a1c8d4452517802f7e0d52e054d4fbe5ba9f54e59688"} Jan 22 10:55:29 crc kubenswrapper[4975]: I0122 10:55:29.765318 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb366d2-e585-41b2-aa75-5835b1576635" path="/var/lib/kubelet/pods/bcb366d2-e585-41b2-aa75-5835b1576635/volumes" Jan 22 10:55:31 crc kubenswrapper[4975]: I0122 10:55:31.926416 4975 generic.go:334] "Generic (PLEG): container finished" podID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerID="79521bd816a20ac3d6fb126f93f3a04b9bbb1c8b538d914765fe3fd8b3363495" exitCode=0 Jan 22 10:55:31 crc kubenswrapper[4975]: I0122 10:55:31.926493 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" event={"ID":"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1","Type":"ContainerDied","Data":"79521bd816a20ac3d6fb126f93f3a04b9bbb1c8b538d914765fe3fd8b3363495"} Jan 22 10:55:32 crc kubenswrapper[4975]: I0122 10:55:32.939245 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" event={"ID":"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1","Type":"ContainerStarted","Data":"751f5849adc66c64e151d777eb035ac1de0e1d14f0c21f2523604318d4638867"} Jan 22 10:55:32 crc kubenswrapper[4975]: I0122 10:55:32.939975 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:32 crc kubenswrapper[4975]: I0122 10:55:32.977568 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" podStartSLOduration=5.9775448650000005 podStartE2EDuration="5.977544865s" podCreationTimestamp="2026-01-22 10:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:55:32.96703753 +0000 UTC m=+1125.625073670" watchObservedRunningTime="2026-01-22 10:55:32.977544865 +0000 UTC m=+1125.635581005" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.030557 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.361485 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-gwzsm"] Jan 22 10:55:33 crc kubenswrapper[4975]: E0122 10:55:33.361899 4975 cpu_manager.go:410] "RemoveStaleState: 
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.361919 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb366d2-e585-41b2-aa75-5835b1576635" containerName="init"
Jan 22 10:55:33 crc kubenswrapper[4975]: E0122 10:55:33.361935 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb366d2-e585-41b2-aa75-5835b1576635" containerName="dnsmasq-dns"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.361942 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb366d2-e585-41b2-aa75-5835b1576635" containerName="dnsmasq-dns"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.362127 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb366d2-e585-41b2-aa75-5835b1576635" containerName="dnsmasq-dns"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.362749 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gwzsm"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.394379 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gwzsm"]
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.448526 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-operator-scripts\") pod \"cinder-db-create-gwzsm\" (UID: \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\") " pod="openstack/cinder-db-create-gwzsm"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.448873 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m552\" (UniqueName: \"kubernetes.io/projected/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-kube-api-access-7m552\") pod \"cinder-db-create-gwzsm\" (UID: \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\") " pod="openstack/cinder-db-create-gwzsm"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.461800 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-48drg"]
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.462971 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-48drg"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.485019 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9f8d-account-create-update-458g6"]
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.486035 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9f8d-account-create-update-458g6"
Need to start a new one" pod="openstack/cinder-9f8d-account-create-update-458g6" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.497214 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.508958 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-48drg"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.523780 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9f8d-account-create-update-458g6"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.550791 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcn6p\" (UniqueName: \"kubernetes.io/projected/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-kube-api-access-dcn6p\") pod \"barbican-db-create-48drg\" (UID: \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\") " pod="openstack/barbican-db-create-48drg" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.550901 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m552\" (UniqueName: \"kubernetes.io/projected/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-kube-api-access-7m552\") pod \"cinder-db-create-gwzsm\" (UID: \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\") " pod="openstack/cinder-db-create-gwzsm" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.550961 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-operator-scripts\") pod \"barbican-db-create-48drg\" (UID: \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\") " pod="openstack/barbican-db-create-48drg" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.550999 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/694b6014-c433-4b63-b320-93da981b6674-operator-scripts\") pod \"cinder-9f8d-account-create-update-458g6\" (UID: \"694b6014-c433-4b63-b320-93da981b6674\") " pod="openstack/cinder-9f8d-account-create-update-458g6" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.551047 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5wrm\" (UniqueName: \"kubernetes.io/projected/694b6014-c433-4b63-b320-93da981b6674-kube-api-access-l5wrm\") pod \"cinder-9f8d-account-create-update-458g6\" (UID: \"694b6014-c433-4b63-b320-93da981b6674\") " pod="openstack/cinder-9f8d-account-create-update-458g6" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.551093 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-operator-scripts\") pod \"cinder-db-create-gwzsm\" (UID: \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\") " pod="openstack/cinder-db-create-gwzsm" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.551877 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-operator-scripts\") pod \"cinder-db-create-gwzsm\" (UID: \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\") " pod="openstack/cinder-db-create-gwzsm" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.571343 4975 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-e9a6-account-create-update-6bcvq"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.573834 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e9a6-account-create-update-6bcvq" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.577475 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.577802 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m552\" (UniqueName: \"kubernetes.io/projected/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-kube-api-access-7m552\") pod \"cinder-db-create-gwzsm\" (UID: \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\") " pod="openstack/cinder-db-create-gwzsm" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.584017 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e9a6-account-create-update-6bcvq"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.652915 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5wrm\" (UniqueName: \"kubernetes.io/projected/694b6014-c433-4b63-b320-93da981b6674-kube-api-access-l5wrm\") pod \"cinder-9f8d-account-create-update-458g6\" (UID: \"694b6014-c433-4b63-b320-93da981b6674\") " pod="openstack/cinder-9f8d-account-create-update-458g6" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.653000 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvhj6\" (UniqueName: \"kubernetes.io/projected/cc532fde-1153-48d1-b6f6-a7c4672549b2-kube-api-access-dvhj6\") pod \"barbican-e9a6-account-create-update-6bcvq\" (UID: \"cc532fde-1153-48d1-b6f6-a7c4672549b2\") " pod="openstack/barbican-e9a6-account-create-update-6bcvq" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.653030 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcn6p\" (UniqueName: \"kubernetes.io/projected/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-kube-api-access-dcn6p\") pod \"barbican-db-create-48drg\" (UID: \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\") " pod="openstack/barbican-db-create-48drg" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.653223 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-operator-scripts\") pod \"barbican-db-create-48drg\" (UID: \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\") " pod="openstack/barbican-db-create-48drg" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.653298 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/694b6014-c433-4b63-b320-93da981b6674-operator-scripts\") pod \"cinder-9f8d-account-create-update-458g6\" (UID: \"694b6014-c433-4b63-b320-93da981b6674\") " pod="openstack/cinder-9f8d-account-create-update-458g6" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.653363 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc532fde-1153-48d1-b6f6-a7c4672549b2-operator-scripts\") pod \"barbican-e9a6-account-create-update-6bcvq\" (UID: \"cc532fde-1153-48d1-b6f6-a7c4672549b2\") " pod="openstack/barbican-e9a6-account-create-update-6bcvq" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.653974 4975 
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.653976 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-operator-scripts\") pod \"barbican-db-create-48drg\" (UID: \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\") " pod="openstack/barbican-db-create-48drg"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.672852 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcn6p\" (UniqueName: \"kubernetes.io/projected/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-kube-api-access-dcn6p\") pod \"barbican-db-create-48drg\" (UID: \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\") " pod="openstack/barbican-db-create-48drg"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.678096 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gwzsm"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.688076 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5wrm\" (UniqueName: \"kubernetes.io/projected/694b6014-c433-4b63-b320-93da981b6674-kube-api-access-l5wrm\") pod \"cinder-9f8d-account-create-update-458g6\" (UID: \"694b6014-c433-4b63-b320-93da981b6674\") " pod="openstack/cinder-9f8d-account-create-update-458g6"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.755453 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc532fde-1153-48d1-b6f6-a7c4672549b2-operator-scripts\") pod \"barbican-e9a6-account-create-update-6bcvq\" (UID: \"cc532fde-1153-48d1-b6f6-a7c4672549b2\") " pod="openstack/barbican-e9a6-account-create-update-6bcvq"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.755556 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvhj6\" (UniqueName: \"kubernetes.io/projected/cc532fde-1153-48d1-b6f6-a7c4672549b2-kube-api-access-dvhj6\") pod \"barbican-e9a6-account-create-update-6bcvq\" (UID: \"cc532fde-1153-48d1-b6f6-a7c4672549b2\") " pod="openstack/barbican-e9a6-account-create-update-6bcvq"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.756631 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc532fde-1153-48d1-b6f6-a7c4672549b2-operator-scripts\") pod \"barbican-e9a6-account-create-update-6bcvq\" (UID: \"cc532fde-1153-48d1-b6f6-a7c4672549b2\") " pod="openstack/barbican-e9a6-account-create-update-6bcvq"
Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.790884 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-48drg"
Need to start a new one" pod="openstack/barbican-db-create-48drg" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.809309 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvhj6\" (UniqueName: \"kubernetes.io/projected/cc532fde-1153-48d1-b6f6-a7c4672549b2-kube-api-access-dvhj6\") pod \"barbican-e9a6-account-create-update-6bcvq\" (UID: \"cc532fde-1153-48d1-b6f6-a7c4672549b2\") " pod="openstack/barbican-e9a6-account-create-update-6bcvq" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.812673 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9f8d-account-create-update-458g6" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.821011 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9wqpd"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.822905 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9wqpd" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.830825 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-71db-account-create-update-g87h8"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.834069 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.841311 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9wqpd"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.849717 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-71db-account-create-update-g87h8"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.868359 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.905491 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bmksp"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.906857 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.910539 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.910870 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.910996 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.911122 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dr2s2" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.913085 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bmksp"] Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.935712 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e9a6-account-create-update-6bcvq" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.970987 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nvdg\" (UniqueName: \"kubernetes.io/projected/56199b69-ad45-4d1a-8c20-ff267c81eab0-kube-api-access-5nvdg\") pod \"neutron-71db-account-create-update-g87h8\" (UID: \"56199b69-ad45-4d1a-8c20-ff267c81eab0\") " pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.971021 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/243d5a33-350a-4e38-8426-44a995a7f7dd-operator-scripts\") pod \"neutron-db-create-9wqpd\" (UID: \"243d5a33-350a-4e38-8426-44a995a7f7dd\") " pod="openstack/neutron-db-create-9wqpd" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.971050 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-config-data\") pod \"keystone-db-sync-bmksp\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.971072 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtvvl\" (UniqueName: \"kubernetes.io/projected/70d3e762-9e06-4bac-9854-780c955df21e-kube-api-access-rtvvl\") pod \"keystone-db-sync-bmksp\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.971092 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkr2\" (UniqueName: \"kubernetes.io/projected/243d5a33-350a-4e38-8426-44a995a7f7dd-kube-api-access-jvkr2\") pod \"neutron-db-create-9wqpd\" (UID: \"243d5a33-350a-4e38-8426-44a995a7f7dd\") " pod="openstack/neutron-db-create-9wqpd" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.971142 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-combined-ca-bundle\") pod \"keystone-db-sync-bmksp\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:33 crc kubenswrapper[4975]: I0122 10:55:33.971207 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56199b69-ad45-4d1a-8c20-ff267c81eab0-operator-scripts\") pod \"neutron-71db-account-create-update-g87h8\" (UID: \"56199b69-ad45-4d1a-8c20-ff267c81eab0\") " pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.072506 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56199b69-ad45-4d1a-8c20-ff267c81eab0-operator-scripts\") pod \"neutron-71db-account-create-update-g87h8\" (UID: \"56199b69-ad45-4d1a-8c20-ff267c81eab0\") " pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.072600 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5nvdg\" (UniqueName: \"kubernetes.io/projected/56199b69-ad45-4d1a-8c20-ff267c81eab0-kube-api-access-5nvdg\") pod \"neutron-71db-account-create-update-g87h8\" (UID: \"56199b69-ad45-4d1a-8c20-ff267c81eab0\") " pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.072634 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/243d5a33-350a-4e38-8426-44a995a7f7dd-operator-scripts\") pod \"neutron-db-create-9wqpd\" (UID: \"243d5a33-350a-4e38-8426-44a995a7f7dd\") " pod="openstack/neutron-db-create-9wqpd" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.072660 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-config-data\") pod \"keystone-db-sync-bmksp\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.072682 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtvvl\" (UniqueName: \"kubernetes.io/projected/70d3e762-9e06-4bac-9854-780c955df21e-kube-api-access-rtvvl\") pod \"keystone-db-sync-bmksp\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.072710 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvkr2\" (UniqueName: \"kubernetes.io/projected/243d5a33-350a-4e38-8426-44a995a7f7dd-kube-api-access-jvkr2\") pod \"neutron-db-create-9wqpd\" (UID: \"243d5a33-350a-4e38-8426-44a995a7f7dd\") " pod="openstack/neutron-db-create-9wqpd" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.072753 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-combined-ca-bundle\") pod \"keystone-db-sync-bmksp\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.074023 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56199b69-ad45-4d1a-8c20-ff267c81eab0-operator-scripts\") pod \"neutron-71db-account-create-update-g87h8\" (UID: \"56199b69-ad45-4d1a-8c20-ff267c81eab0\") " pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.074981 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/243d5a33-350a-4e38-8426-44a995a7f7dd-operator-scripts\") pod \"neutron-db-create-9wqpd\" (UID: \"243d5a33-350a-4e38-8426-44a995a7f7dd\") " pod="openstack/neutron-db-create-9wqpd" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.078407 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-config-data\") pod \"keystone-db-sync-bmksp\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.079743 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-combined-ca-bundle\") pod \"keystone-db-sync-bmksp\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.103417 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtvvl\" (UniqueName: \"kubernetes.io/projected/70d3e762-9e06-4bac-9854-780c955df21e-kube-api-access-rtvvl\") pod \"keystone-db-sync-bmksp\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.113161 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nvdg\" (UniqueName: \"kubernetes.io/projected/56199b69-ad45-4d1a-8c20-ff267c81eab0-kube-api-access-5nvdg\") pod \"neutron-71db-account-create-update-g87h8\" (UID: \"56199b69-ad45-4d1a-8c20-ff267c81eab0\") " pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.122898 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvkr2\" (UniqueName: \"kubernetes.io/projected/243d5a33-350a-4e38-8426-44a995a7f7dd-kube-api-access-jvkr2\") pod \"neutron-db-create-9wqpd\" (UID: \"243d5a33-350a-4e38-8426-44a995a7f7dd\") " pod="openstack/neutron-db-create-9wqpd" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.206131 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9wqpd" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.234670 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.253810 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.281826 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gwzsm"] Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.320212 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-48drg"] Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.568884 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e9a6-account-create-update-6bcvq"] Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.681395 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9f8d-account-create-update-458g6"] Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.784633 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9wqpd"] Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.873006 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bmksp"] Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.918103 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-71db-account-create-update-g87h8"] Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.965971 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gwzsm" event={"ID":"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff","Type":"ContainerStarted","Data":"6a46c7a143f953312f9619c592f4398fa18d7f1063548b8997e5d46b0e89ad6d"} Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.966008 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gwzsm" event={"ID":"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff","Type":"ContainerStarted","Data":"a243dabe607de283412a3418443fa08cce39f5c27749a7e67b8074d4d6609cbf"} Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.973951 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9wqpd" event={"ID":"243d5a33-350a-4e38-8426-44a995a7f7dd","Type":"ContainerStarted","Data":"af76904af59f84a1471ba5dbce0be6159c7c33656af219b02a4367d302f30b29"} Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.982408 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-gwzsm" podStartSLOduration=1.982389751 podStartE2EDuration="1.982389751s" podCreationTimestamp="2026-01-22 10:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:55:34.980315221 +0000 UTC m=+1127.638351331" watchObservedRunningTime="2026-01-22 10:55:34.982389751 +0000 UTC m=+1127.640425861" Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.990575 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e9a6-account-create-update-6bcvq" event={"ID":"cc532fde-1153-48d1-b6f6-a7c4672549b2","Type":"ContainerStarted","Data":"b7c1a7424e54218d5dcdcd917bbdf77eb3e476a65ad3018720666ca105c79038"} Jan 22 10:55:34 crc kubenswrapper[4975]: I0122 10:55:34.999466 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bmksp" event={"ID":"70d3e762-9e06-4bac-9854-780c955df21e","Type":"ContainerStarted","Data":"ded15c46e5a9d5389adb75a08f0412bd23f31cdf99c820e7897a45cf6b0699ef"} Jan 22 10:55:35 crc kubenswrapper[4975]: I0122 10:55:35.009944 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-48drg" 
event={"ID":"78fe7f97-1158-437d-b365-2d2ae6ecd6b1","Type":"ContainerStarted","Data":"b060affecd1b1037a6ec65f2c6bf1e98d7f1a8c8ce711986ba10c5735b28745e"} Jan 22 10:55:35 crc kubenswrapper[4975]: I0122 10:55:35.010005 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-48drg" event={"ID":"78fe7f97-1158-437d-b365-2d2ae6ecd6b1","Type":"ContainerStarted","Data":"7d8e674a20e40a8e4b06e11b152e3ece6d8245fbe3c6dddbe8085ca6e2a5e3fc"} Jan 22 10:55:35 crc kubenswrapper[4975]: I0122 10:55:35.014991 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-71db-account-create-update-g87h8" event={"ID":"56199b69-ad45-4d1a-8c20-ff267c81eab0","Type":"ContainerStarted","Data":"5e938dee32af94837af89f06e028448ab4b05a31b1f587d119c7b59eb4b3f522"} Jan 22 10:55:35 crc kubenswrapper[4975]: I0122 10:55:35.015732 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9f8d-account-create-update-458g6" event={"ID":"694b6014-c433-4b63-b320-93da981b6674","Type":"ContainerStarted","Data":"e52730cb967703b390f5484dc33bfd5f712d80efb6c63a72913e362ccdaf62ac"} Jan 22 10:55:35 crc kubenswrapper[4975]: I0122 10:55:35.039612 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-48drg" podStartSLOduration=2.039591669 podStartE2EDuration="2.039591669s" podCreationTimestamp="2026-01-22 10:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:55:35.024695358 +0000 UTC m=+1127.682731468" watchObservedRunningTime="2026-01-22 10:55:35.039591669 +0000 UTC m=+1127.697627779" Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.024839 4975 generic.go:334] "Generic (PLEG): container finished" podID="78fe7f97-1158-437d-b365-2d2ae6ecd6b1" containerID="b060affecd1b1037a6ec65f2c6bf1e98d7f1a8c8ce711986ba10c5735b28745e" exitCode=0 Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.025088 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-48drg" event={"ID":"78fe7f97-1158-437d-b365-2d2ae6ecd6b1","Type":"ContainerDied","Data":"b060affecd1b1037a6ec65f2c6bf1e98d7f1a8c8ce711986ba10c5735b28745e"} Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.027033 4975 generic.go:334] "Generic (PLEG): container finished" podID="56199b69-ad45-4d1a-8c20-ff267c81eab0" containerID="302f4fb16f3e5775c821ccb4e72c026b55f052b127345865eb92b19747da9366" exitCode=0 Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.027084 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-71db-account-create-update-g87h8" event={"ID":"56199b69-ad45-4d1a-8c20-ff267c81eab0","Type":"ContainerDied","Data":"302f4fb16f3e5775c821ccb4e72c026b55f052b127345865eb92b19747da9366"} Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.033603 4975 generic.go:334] "Generic (PLEG): container finished" podID="694b6014-c433-4b63-b320-93da981b6674" containerID="71178dd7435780958d0a9b0a4da0a2a9975a4546dccc68f912a46198e7a950bb" exitCode=0 Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.033776 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9f8d-account-create-update-458g6" event={"ID":"694b6014-c433-4b63-b320-93da981b6674","Type":"ContainerDied","Data":"71178dd7435780958d0a9b0a4da0a2a9975a4546dccc68f912a46198e7a950bb"} Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.035723 4975 generic.go:334] "Generic (PLEG): container finished" 
podID="e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff" containerID="6a46c7a143f953312f9619c592f4398fa18d7f1063548b8997e5d46b0e89ad6d" exitCode=0 Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.035871 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gwzsm" event={"ID":"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff","Type":"ContainerDied","Data":"6a46c7a143f953312f9619c592f4398fa18d7f1063548b8997e5d46b0e89ad6d"} Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.037408 4975 generic.go:334] "Generic (PLEG): container finished" podID="243d5a33-350a-4e38-8426-44a995a7f7dd" containerID="5cf8449da87971b26339e7ac5c7b5e19a6d1b97c5abb82e29be306bd194d7f2e" exitCode=0 Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.037523 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9wqpd" event={"ID":"243d5a33-350a-4e38-8426-44a995a7f7dd","Type":"ContainerDied","Data":"5cf8449da87971b26339e7ac5c7b5e19a6d1b97c5abb82e29be306bd194d7f2e"} Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.040975 4975 generic.go:334] "Generic (PLEG): container finished" podID="cc532fde-1153-48d1-b6f6-a7c4672549b2" containerID="1a7ad298b3c0975e0f4b13cb3bf93f04f6f28871ac05341459dea0561dd69c3a" exitCode=0 Jan 22 10:55:36 crc kubenswrapper[4975]: I0122 10:55:36.041023 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e9a6-account-create-update-6bcvq" event={"ID":"cc532fde-1153-48d1-b6f6-a7c4672549b2","Type":"ContainerDied","Data":"1a7ad298b3c0975e0f4b13cb3bf93f04f6f28871ac05341459dea0561dd69c3a"} Jan 22 10:55:37 crc kubenswrapper[4975]: I0122 10:55:37.709502 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:55:37 crc kubenswrapper[4975]: I0122 10:55:37.800304 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7dw86"] Jan 22 10:55:37 crc kubenswrapper[4975]: I0122 10:55:37.800570 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7dw86" podUID="c13e808a-df7b-4367-b5e3-17aab90f3699" containerName="dnsmasq-dns" containerID="cri-o://95b80824a79fea3e12f167b859314c6382cb9abd65e1b5f0aa7c2febd3ffd38e" gracePeriod=10 Jan 22 10:55:38 crc kubenswrapper[4975]: I0122 10:55:38.059543 4975 generic.go:334] "Generic (PLEG): container finished" podID="c13e808a-df7b-4367-b5e3-17aab90f3699" containerID="95b80824a79fea3e12f167b859314c6382cb9abd65e1b5f0aa7c2febd3ffd38e" exitCode=0 Jan 22 10:55:38 crc kubenswrapper[4975]: I0122 10:55:38.059619 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7dw86" event={"ID":"c13e808a-df7b-4367-b5e3-17aab90f3699","Type":"ContainerDied","Data":"95b80824a79fea3e12f167b859314c6382cb9abd65e1b5f0aa7c2febd3ffd38e"} Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.107892 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7dw86" podUID="c13e808a-df7b-4367-b5e3-17aab90f3699" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.742003 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gwzsm" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.767415 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9f8d-account-create-update-458g6" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.787850 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-48drg" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.796850 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9wqpd" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.812914 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e9a6-account-create-update-6bcvq" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.821492 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.857633 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/694b6014-c433-4b63-b320-93da981b6674-operator-scripts\") pod \"694b6014-c433-4b63-b320-93da981b6674\" (UID: \"694b6014-c433-4b63-b320-93da981b6674\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.857729 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-operator-scripts\") pod \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\" (UID: \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.857810 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m552\" (UniqueName: \"kubernetes.io/projected/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-kube-api-access-7m552\") pod \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\" (UID: \"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.857860 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5wrm\" (UniqueName: \"kubernetes.io/projected/694b6014-c433-4b63-b320-93da981b6674-kube-api-access-l5wrm\") pod \"694b6014-c433-4b63-b320-93da981b6674\" (UID: \"694b6014-c433-4b63-b320-93da981b6674\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.859862 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff" (UID: "e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.861154 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694b6014-c433-4b63-b320-93da981b6674-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "694b6014-c433-4b63-b320-93da981b6674" (UID: "694b6014-c433-4b63-b320-93da981b6674"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.863365 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694b6014-c433-4b63-b320-93da981b6674-kube-api-access-l5wrm" (OuterVolumeSpecName: "kube-api-access-l5wrm") pod "694b6014-c433-4b63-b320-93da981b6674" (UID: "694b6014-c433-4b63-b320-93da981b6674"). InnerVolumeSpecName "kube-api-access-l5wrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.864009 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-kube-api-access-7m552" (OuterVolumeSpecName: "kube-api-access-7m552") pod "e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff" (UID: "e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff"). InnerVolumeSpecName "kube-api-access-7m552". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.881713 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.959644 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nvdg\" (UniqueName: \"kubernetes.io/projected/56199b69-ad45-4d1a-8c20-ff267c81eab0-kube-api-access-5nvdg\") pod \"56199b69-ad45-4d1a-8c20-ff267c81eab0\" (UID: \"56199b69-ad45-4d1a-8c20-ff267c81eab0\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.959698 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvkr2\" (UniqueName: \"kubernetes.io/projected/243d5a33-350a-4e38-8426-44a995a7f7dd-kube-api-access-jvkr2\") pod \"243d5a33-350a-4e38-8426-44a995a7f7dd\" (UID: \"243d5a33-350a-4e38-8426-44a995a7f7dd\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.959725 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc532fde-1153-48d1-b6f6-a7c4672549b2-operator-scripts\") pod \"cc532fde-1153-48d1-b6f6-a7c4672549b2\" (UID: \"cc532fde-1153-48d1-b6f6-a7c4672549b2\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.959748 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56199b69-ad45-4d1a-8c20-ff267c81eab0-operator-scripts\") pod \"56199b69-ad45-4d1a-8c20-ff267c81eab0\" (UID: \"56199b69-ad45-4d1a-8c20-ff267c81eab0\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.959793 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcn6p\" (UniqueName: \"kubernetes.io/projected/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-kube-api-access-dcn6p\") pod \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\" (UID: \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.959851 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvhj6\" (UniqueName: \"kubernetes.io/projected/cc532fde-1153-48d1-b6f6-a7c4672549b2-kube-api-access-dvhj6\") pod \"cc532fde-1153-48d1-b6f6-a7c4672549b2\" (UID: \"cc532fde-1153-48d1-b6f6-a7c4672549b2\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.959948 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/243d5a33-350a-4e38-8426-44a995a7f7dd-operator-scripts\") pod \"243d5a33-350a-4e38-8426-44a995a7f7dd\" (UID: \"243d5a33-350a-4e38-8426-44a995a7f7dd\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.959976 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-operator-scripts\") pod \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\" (UID: \"78fe7f97-1158-437d-b365-2d2ae6ecd6b1\") " Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.960364 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.960386 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m552\" (UniqueName: \"kubernetes.io/projected/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff-kube-api-access-7m552\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.960400 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5wrm\" (UniqueName: \"kubernetes.io/projected/694b6014-c433-4b63-b320-93da981b6674-kube-api-access-l5wrm\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.960414 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/694b6014-c433-4b63-b320-93da981b6674-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.960861 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78fe7f97-1158-437d-b365-2d2ae6ecd6b1" (UID: "78fe7f97-1158-437d-b365-2d2ae6ecd6b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.961541 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc532fde-1153-48d1-b6f6-a7c4672549b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc532fde-1153-48d1-b6f6-a7c4672549b2" (UID: "cc532fde-1153-48d1-b6f6-a7c4672549b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.961596 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56199b69-ad45-4d1a-8c20-ff267c81eab0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56199b69-ad45-4d1a-8c20-ff267c81eab0" (UID: "56199b69-ad45-4d1a-8c20-ff267c81eab0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.962028 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/243d5a33-350a-4e38-8426-44a995a7f7dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "243d5a33-350a-4e38-8426-44a995a7f7dd" (UID: "243d5a33-350a-4e38-8426-44a995a7f7dd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.964674 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-kube-api-access-dcn6p" (OuterVolumeSpecName: "kube-api-access-dcn6p") pod "78fe7f97-1158-437d-b365-2d2ae6ecd6b1" (UID: "78fe7f97-1158-437d-b365-2d2ae6ecd6b1"). InnerVolumeSpecName "kube-api-access-dcn6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.965178 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc532fde-1153-48d1-b6f6-a7c4672549b2-kube-api-access-dvhj6" (OuterVolumeSpecName: "kube-api-access-dvhj6") pod "cc532fde-1153-48d1-b6f6-a7c4672549b2" (UID: "cc532fde-1153-48d1-b6f6-a7c4672549b2"). InnerVolumeSpecName "kube-api-access-dvhj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.965869 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/243d5a33-350a-4e38-8426-44a995a7f7dd-kube-api-access-jvkr2" (OuterVolumeSpecName: "kube-api-access-jvkr2") pod "243d5a33-350a-4e38-8426-44a995a7f7dd" (UID: "243d5a33-350a-4e38-8426-44a995a7f7dd"). InnerVolumeSpecName "kube-api-access-jvkr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:41 crc kubenswrapper[4975]: I0122 10:55:41.966522 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56199b69-ad45-4d1a-8c20-ff267c81eab0-kube-api-access-5nvdg" (OuterVolumeSpecName: "kube-api-access-5nvdg") pod "56199b69-ad45-4d1a-8c20-ff267c81eab0" (UID: "56199b69-ad45-4d1a-8c20-ff267c81eab0"). InnerVolumeSpecName "kube-api-access-5nvdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.062107 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-nb\") pod \"c13e808a-df7b-4367-b5e3-17aab90f3699\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.064227 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6vfn\" (UniqueName: \"kubernetes.io/projected/c13e808a-df7b-4367-b5e3-17aab90f3699-kube-api-access-b6vfn\") pod \"c13e808a-df7b-4367-b5e3-17aab90f3699\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.064325 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-config\") pod \"c13e808a-df7b-4367-b5e3-17aab90f3699\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.064541 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-sb\") pod \"c13e808a-df7b-4367-b5e3-17aab90f3699\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.064815 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-dns-svc\") pod \"c13e808a-df7b-4367-b5e3-17aab90f3699\" (UID: \"c13e808a-df7b-4367-b5e3-17aab90f3699\") " Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.065976 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvhj6\" (UniqueName: \"kubernetes.io/projected/cc532fde-1153-48d1-b6f6-a7c4672549b2-kube-api-access-dvhj6\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.066021 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/243d5a33-350a-4e38-8426-44a995a7f7dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.066043 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.066062 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nvdg\" (UniqueName: \"kubernetes.io/projected/56199b69-ad45-4d1a-8c20-ff267c81eab0-kube-api-access-5nvdg\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.066079 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvkr2\" (UniqueName: \"kubernetes.io/projected/243d5a33-350a-4e38-8426-44a995a7f7dd-kube-api-access-jvkr2\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.066096 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc532fde-1153-48d1-b6f6-a7c4672549b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.066113 4975 
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.066130 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcn6p\" (UniqueName: \"kubernetes.io/projected/78fe7f97-1158-437d-b365-2d2ae6ecd6b1-kube-api-access-dcn6p\") on node \"crc\" DevicePath \"\""
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.070731 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13e808a-df7b-4367-b5e3-17aab90f3699-kube-api-access-b6vfn" (OuterVolumeSpecName: "kube-api-access-b6vfn") pod "c13e808a-df7b-4367-b5e3-17aab90f3699" (UID: "c13e808a-df7b-4367-b5e3-17aab90f3699"). InnerVolumeSpecName "kube-api-access-b6vfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.105925 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9wqpd" event={"ID":"243d5a33-350a-4e38-8426-44a995a7f7dd","Type":"ContainerDied","Data":"af76904af59f84a1471ba5dbce0be6159c7c33656af219b02a4367d302f30b29"}
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.105982 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9wqpd"
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.105998 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af76904af59f84a1471ba5dbce0be6159c7c33656af219b02a4367d302f30b29"
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.108907 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e9a6-account-create-update-6bcvq"
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.108910 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e9a6-account-create-update-6bcvq" event={"ID":"cc532fde-1153-48d1-b6f6-a7c4672549b2","Type":"ContainerDied","Data":"b7c1a7424e54218d5dcdcd917bbdf77eb3e476a65ad3018720666ca105c79038"}
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.109072 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c1a7424e54218d5dcdcd917bbdf77eb3e476a65ad3018720666ca105c79038"
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.111538 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bmksp" event={"ID":"70d3e762-9e06-4bac-9854-780c955df21e","Type":"ContainerStarted","Data":"ad1ee4e20a9d65eb86d33293edec49115a098261494983322f2b44ab17cf131b"}
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.113541 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c13e808a-df7b-4367-b5e3-17aab90f3699" (UID: "c13e808a-df7b-4367-b5e3-17aab90f3699"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.114387 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-48drg"
Need to start a new one" pod="openstack/barbican-db-create-48drg" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.114552 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-48drg" event={"ID":"78fe7f97-1158-437d-b365-2d2ae6ecd6b1","Type":"ContainerDied","Data":"7d8e674a20e40a8e4b06e11b152e3ece6d8245fbe3c6dddbe8085ca6e2a5e3fc"} Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.114597 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d8e674a20e40a8e4b06e11b152e3ece6d8245fbe3c6dddbe8085ca6e2a5e3fc" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.121137 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-71db-account-create-update-g87h8" event={"ID":"56199b69-ad45-4d1a-8c20-ff267c81eab0","Type":"ContainerDied","Data":"5e938dee32af94837af89f06e028448ab4b05a31b1f587d119c7b59eb4b3f522"} Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.121409 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e938dee32af94837af89f06e028448ab4b05a31b1f587d119c7b59eb4b3f522" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.121523 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-71db-account-create-update-g87h8" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.128667 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9f8d-account-create-update-458g6" event={"ID":"694b6014-c433-4b63-b320-93da981b6674","Type":"ContainerDied","Data":"e52730cb967703b390f5484dc33bfd5f712d80efb6c63a72913e362ccdaf62ac"} Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.128718 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9f8d-account-create-update-458g6" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.128724 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e52730cb967703b390f5484dc33bfd5f712d80efb6c63a72913e362ccdaf62ac" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.132837 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gwzsm" event={"ID":"e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff","Type":"ContainerDied","Data":"a243dabe607de283412a3418443fa08cce39f5c27749a7e67b8074d4d6609cbf"} Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.132895 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a243dabe607de283412a3418443fa08cce39f5c27749a7e67b8074d4d6609cbf" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.134220 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gwzsm" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.140272 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-config" (OuterVolumeSpecName: "config") pod "c13e808a-df7b-4367-b5e3-17aab90f3699" (UID: "c13e808a-df7b-4367-b5e3-17aab90f3699"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.142482 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7dw86" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.142524 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7dw86" event={"ID":"c13e808a-df7b-4367-b5e3-17aab90f3699","Type":"ContainerDied","Data":"f0f2277de2e8d59765aed493426fd3a2f2f0ed0abdf51be53d9da8e2d514f0ef"} Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.142586 4975 scope.go:117] "RemoveContainer" containerID="95b80824a79fea3e12f167b859314c6382cb9abd65e1b5f0aa7c2febd3ffd38e" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.143968 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bmksp" podStartSLOduration=2.456670292 podStartE2EDuration="9.143948816s" podCreationTimestamp="2026-01-22 10:55:33 +0000 UTC" firstStartedPulling="2026-01-22 10:55:34.907598836 +0000 UTC m=+1127.565634946" lastFinishedPulling="2026-01-22 10:55:41.59487732 +0000 UTC m=+1134.252913470" observedRunningTime="2026-01-22 10:55:42.133605256 +0000 UTC m=+1134.791641356" watchObservedRunningTime="2026-01-22 10:55:42.143948816 +0000 UTC m=+1134.801984936" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.151747 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c13e808a-df7b-4367-b5e3-17aab90f3699" (UID: "c13e808a-df7b-4367-b5e3-17aab90f3699"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.165436 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c13e808a-df7b-4367-b5e3-17aab90f3699" (UID: "c13e808a-df7b-4367-b5e3-17aab90f3699"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.170062 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.170102 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.170117 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.170129 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6vfn\" (UniqueName: \"kubernetes.io/projected/c13e808a-df7b-4367-b5e3-17aab90f3699-kube-api-access-b6vfn\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.170146 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c13e808a-df7b-4367-b5e3-17aab90f3699-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.178306 4975 scope.go:117] "RemoveContainer" containerID="d141807d6443e9bfe1060d5fe3ca52904b08d6e1149ce88632d6ae4c94fb0003" Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.497863 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7dw86"] Jan 22 10:55:42 crc kubenswrapper[4975]: I0122 10:55:42.508994 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7dw86"] Jan 22 10:55:43 crc kubenswrapper[4975]: I0122 10:55:43.772142 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c13e808a-df7b-4367-b5e3-17aab90f3699" path="/var/lib/kubelet/pods/c13e808a-df7b-4367-b5e3-17aab90f3699/volumes" Jan 22 10:55:45 crc kubenswrapper[4975]: I0122 10:55:45.177200 4975 generic.go:334] "Generic (PLEG): container finished" podID="70d3e762-9e06-4bac-9854-780c955df21e" containerID="ad1ee4e20a9d65eb86d33293edec49115a098261494983322f2b44ab17cf131b" exitCode=0 Jan 22 10:55:45 crc kubenswrapper[4975]: I0122 10:55:45.177634 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bmksp" event={"ID":"70d3e762-9e06-4bac-9854-780c955df21e","Type":"ContainerDied","Data":"ad1ee4e20a9d65eb86d33293edec49115a098261494983322f2b44ab17cf131b"} Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.616041 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.752091 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtvvl\" (UniqueName: \"kubernetes.io/projected/70d3e762-9e06-4bac-9854-780c955df21e-kube-api-access-rtvvl\") pod \"70d3e762-9e06-4bac-9854-780c955df21e\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.752180 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-config-data\") pod \"70d3e762-9e06-4bac-9854-780c955df21e\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.752278 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-combined-ca-bundle\") pod \"70d3e762-9e06-4bac-9854-780c955df21e\" (UID: \"70d3e762-9e06-4bac-9854-780c955df21e\") " Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.759487 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d3e762-9e06-4bac-9854-780c955df21e-kube-api-access-rtvvl" (OuterVolumeSpecName: "kube-api-access-rtvvl") pod "70d3e762-9e06-4bac-9854-780c955df21e" (UID: "70d3e762-9e06-4bac-9854-780c955df21e"). InnerVolumeSpecName "kube-api-access-rtvvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.787482 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70d3e762-9e06-4bac-9854-780c955df21e" (UID: "70d3e762-9e06-4bac-9854-780c955df21e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.807510 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-config-data" (OuterVolumeSpecName: "config-data") pod "70d3e762-9e06-4bac-9854-780c955df21e" (UID: "70d3e762-9e06-4bac-9854-780c955df21e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.855472 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.855521 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtvvl\" (UniqueName: \"kubernetes.io/projected/70d3e762-9e06-4bac-9854-780c955df21e-kube-api-access-rtvvl\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:46 crc kubenswrapper[4975]: I0122 10:55:46.855542 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d3e762-9e06-4bac-9854-780c955df21e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.200704 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bmksp" event={"ID":"70d3e762-9e06-4bac-9854-780c955df21e","Type":"ContainerDied","Data":"ded15c46e5a9d5389adb75a08f0412bd23f31cdf99c820e7897a45cf6b0699ef"} Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.200758 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ded15c46e5a9d5389adb75a08f0412bd23f31cdf99c820e7897a45cf6b0699ef" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.200794 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bmksp" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.499083 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nhtvn"] Jan 22 10:55:47 crc kubenswrapper[4975]: E0122 10:55:47.499901 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694b6014-c433-4b63-b320-93da981b6674" containerName="mariadb-account-create-update" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.499927 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="694b6014-c433-4b63-b320-93da981b6674" containerName="mariadb-account-create-update" Jan 22 10:55:47 crc kubenswrapper[4975]: E0122 10:55:47.499941 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56199b69-ad45-4d1a-8c20-ff267c81eab0" containerName="mariadb-account-create-update" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.499950 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="56199b69-ad45-4d1a-8c20-ff267c81eab0" containerName="mariadb-account-create-update" Jan 22 10:55:47 crc kubenswrapper[4975]: E0122 10:55:47.499964 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13e808a-df7b-4367-b5e3-17aab90f3699" containerName="init" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.499972 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13e808a-df7b-4367-b5e3-17aab90f3699" containerName="init" Jan 22 10:55:47 crc kubenswrapper[4975]: E0122 10:55:47.499989 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="243d5a33-350a-4e38-8426-44a995a7f7dd" containerName="mariadb-database-create" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.499996 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="243d5a33-350a-4e38-8426-44a995a7f7dd" containerName="mariadb-database-create" Jan 22 10:55:47 crc kubenswrapper[4975]: E0122 10:55:47.500016 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc532fde-1153-48d1-b6f6-a7c4672549b2" 
containerName="mariadb-account-create-update" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500024 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc532fde-1153-48d1-b6f6-a7c4672549b2" containerName="mariadb-account-create-update" Jan 22 10:55:47 crc kubenswrapper[4975]: E0122 10:55:47.500035 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13e808a-df7b-4367-b5e3-17aab90f3699" containerName="dnsmasq-dns" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500043 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13e808a-df7b-4367-b5e3-17aab90f3699" containerName="dnsmasq-dns" Jan 22 10:55:47 crc kubenswrapper[4975]: E0122 10:55:47.500062 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff" containerName="mariadb-database-create" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500070 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff" containerName="mariadb-database-create" Jan 22 10:55:47 crc kubenswrapper[4975]: E0122 10:55:47.500086 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fe7f97-1158-437d-b365-2d2ae6ecd6b1" containerName="mariadb-database-create" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500095 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fe7f97-1158-437d-b365-2d2ae6ecd6b1" containerName="mariadb-database-create" Jan 22 10:55:47 crc kubenswrapper[4975]: E0122 10:55:47.500115 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d3e762-9e06-4bac-9854-780c955df21e" containerName="keystone-db-sync" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500122 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d3e762-9e06-4bac-9854-780c955df21e" containerName="keystone-db-sync" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500355 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c13e808a-df7b-4367-b5e3-17aab90f3699" containerName="dnsmasq-dns" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500371 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc532fde-1153-48d1-b6f6-a7c4672549b2" containerName="mariadb-account-create-update" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500380 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="243d5a33-350a-4e38-8426-44a995a7f7dd" containerName="mariadb-database-create" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500391 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="694b6014-c433-4b63-b320-93da981b6674" containerName="mariadb-account-create-update" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500402 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="56199b69-ad45-4d1a-8c20-ff267c81eab0" containerName="mariadb-account-create-update" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500417 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d3e762-9e06-4bac-9854-780c955df21e" containerName="keystone-db-sync" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500430 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff" containerName="mariadb-database-create" Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.500444 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fe7f97-1158-437d-b365-2d2ae6ecd6b1" containerName="mariadb-database-create" 
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.501108 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.505204 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.505460 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.505683 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.507682 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.510800 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dr2s2"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.521115 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wd8ph"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.522674 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.532599 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wd8ph"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.539280 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nhtvn"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669404 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669447 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9qnw\" (UniqueName: \"kubernetes.io/projected/495c93ec-ddc9-4538-9e76-0adfe2cb255d-kube-api-access-v9qnw\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669538 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669572 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhjgg\" (UniqueName: \"kubernetes.io/projected/d1484373-df23-4080-b6e8-9afc0ab6ec91-kube-api-access-dhjgg\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669597 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-scripts\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669701 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-fernet-keys\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669717 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-config\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669740 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-combined-ca-bundle\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669779 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669817 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-credential-keys\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669881 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-config-data\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.669901 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.688572 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-x24jb"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.689515 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.693662 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.693883 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-k2tzk"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.693936 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.743503 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x24jb"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.754434 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cf7789c7-gvvgh"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.755745 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.767474 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.767700 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tmrb8"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.795321 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.796238 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804405 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-db-sync-config-data\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804454 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804475 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhjgg\" (UniqueName: \"kubernetes.io/projected/d1484373-df23-4080-b6e8-9afc0ab6ec91-kube-api-access-dhjgg\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804496 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-scripts\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804570 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d2af291-1e93-47b6-bff9-14bba4d9d553-horizon-secret-key\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804599 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-fernet-keys\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804618 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-config\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804638 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-combined-ca-bundle\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804656 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804675 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-credential-keys\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804716 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnl69\" (UniqueName: \"kubernetes.io/projected/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-kube-api-access-qnl69\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804736 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-combined-ca-bundle\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804760 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-config-data\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804781 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804817 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmblj\" (UniqueName: \"kubernetes.io/projected/8d2af291-1e93-47b6-bff9-14bba4d9d553-kube-api-access-rmblj\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804838 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-config-data\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804878 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-scripts\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804908 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-scripts\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804930 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-config-data\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804959 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.804985 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9qnw\" (UniqueName: \"kubernetes.io/projected/495c93ec-ddc9-4538-9e76-0adfe2cb255d-kube-api-access-v9qnw\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.805015 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-etc-machine-id\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.805032 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2af291-1e93-47b6-bff9-14bba4d9d553-logs\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.805513 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.806717 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.819522 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.832826 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-combined-ca-bundle\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.833006 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.834682 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.840017 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-config\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.840713 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.840895 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.855107 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-config-data\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.859585 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-fernet-keys\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.863140 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cf7789c7-gvvgh"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.867699 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-scripts\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.881493 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.888447 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.892865 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-credential-keys\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.897422 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.897624 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.898234 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhjgg\" (UniqueName: \"kubernetes.io/projected/d1484373-df23-4080-b6e8-9afc0ab6ec91-kube-api-access-dhjgg\") pod \"keystone-bootstrap-nhtvn\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907390 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-etc-machine-id\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907436 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2af291-1e93-47b6-bff9-14bba4d9d553-logs\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907462 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-db-sync-config-data\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907505 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d2af291-1e93-47b6-bff9-14bba4d9d553-horizon-secret-key\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907544 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnl69\" (UniqueName: \"kubernetes.io/projected/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-kube-api-access-qnl69\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907561 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-combined-ca-bundle\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907589 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmblj\" (UniqueName: \"kubernetes.io/projected/8d2af291-1e93-47b6-bff9-14bba4d9d553-kube-api-access-rmblj\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907608 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-config-data\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907630 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-scripts\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907658 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-scripts\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907674 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-config-data\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.907826 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-etc-machine-id\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.908238 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2af291-1e93-47b6-bff9-14bba4d9d553-logs\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.909552 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-scripts\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.917165 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-combined-ca-bundle\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.927014 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-config-data\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.937506 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d2af291-1e93-47b6-bff9-14bba4d9d553-horizon-secret-key\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.947866 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.948065 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.950502 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9qnw\" (UniqueName: \"kubernetes.io/projected/495c93ec-ddc9-4538-9e76-0adfe2cb255d-kube-api-access-v9qnw\") pod \"dnsmasq-dns-847c4cc679-wd8ph\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.953120 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.958836 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-config-data\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.962799 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-scripts\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.967353 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mkhfs"]
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.973073 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-db-sync-config-data\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.987917 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mkhfs"
Jan 22 10:55:47 crc kubenswrapper[4975]: I0122 10:55:47.994200 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmblj\" (UniqueName: \"kubernetes.io/projected/8d2af291-1e93-47b6-bff9-14bba4d9d553-kube-api-access-rmblj\") pod \"horizon-cf7789c7-gvvgh\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.000626 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnl69\" (UniqueName: \"kubernetes.io/projected/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-kube-api-access-qnl69\") pod \"cinder-db-sync-x24jb\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.007596 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nzqbb"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.007747 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.007766 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.008794 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-log-httpd\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.008835 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-scripts\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.008853 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76rdw\" (UniqueName: \"kubernetes.io/projected/420d2506-62c2-4217-bab2-0ea643c9d8cb-kube-api-access-76rdw\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.008873 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-run-httpd\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.008895 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-config-data\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.008923 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.008945 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.027709 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-k2tzk"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.028177 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x24jb"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.041537 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mkhfs"]
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.047205 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf7789c7-gvvgh"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.082685 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59bb6cb8b5-l9vts"]
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.087237 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.111793 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09fa03d-3235-4032-bebb-2f4b21546f90-horizon-secret-key\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.111844 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-combined-ca-bundle\") pod \"neutron-db-sync-mkhfs\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " pod="openstack/neutron-db-sync-mkhfs"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.111866 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-log-httpd\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.111910 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-scripts\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.111928 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rdw\" (UniqueName: \"kubernetes.io/projected/420d2506-62c2-4217-bab2-0ea643c9d8cb-kube-api-access-76rdw\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.111956 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-run-httpd\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.111981 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-config-data\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112011 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112030 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-config-data\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112052 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112067 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-scripts\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112086 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-775pm\" (UniqueName: \"kubernetes.io/projected/590caacf-c377-4aed-abad-b927b28f51f5-kube-api-access-775pm\") pod \"neutron-db-sync-mkhfs\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " pod="openstack/neutron-db-sync-mkhfs"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112159 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgz66\" (UniqueName: \"kubernetes.io/projected/c09fa03d-3235-4032-bebb-2f4b21546f90-kube-api-access-vgz66\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112214 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-config\") pod \"neutron-db-sync-mkhfs\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " pod="openstack/neutron-db-sync-mkhfs"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112244 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09fa03d-3235-4032-bebb-2f4b21546f90-logs\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112528 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-log-httpd\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.112763 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-run-httpd\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.129853 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.133625 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.143234 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-97z49"]
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.144243 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-97z49"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.145022 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dr2s2"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.145128 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nhtvn"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.146517 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-scripts\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.149715 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-config-data\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.151371 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ddvc7"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.151551 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.163091 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-wd8ph"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.177510 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59bb6cb8b5-l9vts"]
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.201536 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-97z49"]
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.217210 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-config-data\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.217252 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-scripts\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.217276 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-775pm\" (UniqueName: \"kubernetes.io/projected/590caacf-c377-4aed-abad-b927b28f51f5-kube-api-access-775pm\") pod \"neutron-db-sync-mkhfs\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " pod="openstack/neutron-db-sync-mkhfs"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.217296 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgz66\" (UniqueName: \"kubernetes.io/projected/c09fa03d-3235-4032-bebb-2f4b21546f90-kube-api-access-vgz66\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.217315 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-config\") pod \"neutron-db-sync-mkhfs\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " pod="openstack/neutron-db-sync-mkhfs"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.217346 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09fa03d-3235-4032-bebb-2f4b21546f90-logs\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.217387 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09fa03d-3235-4032-bebb-2f4b21546f90-horizon-secret-key\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.217412 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-combined-ca-bundle\") pod \"neutron-db-sync-mkhfs\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " pod="openstack/neutron-db-sync-mkhfs"
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.224639 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\"
(UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-config-data\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.225050 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-scripts\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.232554 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09fa03d-3235-4032-bebb-2f4b21546f90-logs\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.241201 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09fa03d-3235-4032-bebb-2f4b21546f90-horizon-secret-key\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.242610 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rdw\" (UniqueName: \"kubernetes.io/projected/420d2506-62c2-4217-bab2-0ea643c9d8cb-kube-api-access-76rdw\") pod \"ceilometer-0\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " pod="openstack/ceilometer-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.242647 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-combined-ca-bundle\") pod \"neutron-db-sync-mkhfs\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " pod="openstack/neutron-db-sync-mkhfs" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.251746 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-config\") pod \"neutron-db-sync-mkhfs\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " pod="openstack/neutron-db-sync-mkhfs" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.262304 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-775pm\" (UniqueName: \"kubernetes.io/projected/590caacf-c377-4aed-abad-b927b28f51f5-kube-api-access-775pm\") pod \"neutron-db-sync-mkhfs\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " pod="openstack/neutron-db-sync-mkhfs" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.262459 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wd8ph"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.274692 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgz66\" (UniqueName: \"kubernetes.io/projected/c09fa03d-3235-4032-bebb-2f4b21546f90-kube-api-access-vgz66\") pod \"horizon-59bb6cb8b5-l9vts\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " pod="openstack/horizon-59bb6cb8b5-l9vts" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.281674 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.283025 4975 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.287075 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.287469 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.287658 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pqwbk" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.287804 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.316584 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hr72x"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.319157 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.319597 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx7tx\" (UniqueName: \"kubernetes.io/projected/abbc90ac-2417-470d-84cf-ddc823f23e50-kube-api-access-xx7tx\") pod \"barbican-db-sync-97z49\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " pod="openstack/barbican-db-sync-97z49" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.332742 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.332936 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.333098 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b26c7" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.334288 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hr72x"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.319701 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-combined-ca-bundle\") pod \"barbican-db-sync-97z49\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " pod="openstack/barbican-db-sync-97z49" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.335662 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-db-sync-config-data\") pod \"barbican-db-sync-97z49\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " pod="openstack/barbican-db-sync-97z49" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.363803 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.374706 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.393786 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mkhfs" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.394218 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4scm"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.395792 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436632 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-config-data\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436676 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-combined-ca-bundle\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436698 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436716 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-scripts\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436740 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-logs\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436759 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436785 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-combined-ca-bundle\") pod \"barbican-db-sync-97z49\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " pod="openstack/barbican-db-sync-97z49" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436830 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-db-sync-config-data\") pod \"barbican-db-sync-97z49\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " pod="openstack/barbican-db-sync-97z49" 
Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436850 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18942582-238b-4674-831a-d1a2284c912c-logs\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436868 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2d5\" (UniqueName: \"kubernetes.io/projected/18942582-238b-4674-831a-d1a2284c912c-kube-api-access-bt2d5\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436882 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436902 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx7tx\" (UniqueName: \"kubernetes.io/projected/abbc90ac-2417-470d-84cf-ddc823f23e50-kube-api-access-xx7tx\") pod \"barbican-db-sync-97z49\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " pod="openstack/barbican-db-sync-97z49" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436917 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436938 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436958 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpqx\" (UniqueName: \"kubernetes.io/projected/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-kube-api-access-tjpqx\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.436978 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.442673 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-combined-ca-bundle\") pod \"barbican-db-sync-97z49\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " 
pod="openstack/barbican-db-sync-97z49" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.442722 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4scm"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.444363 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59bb6cb8b5-l9vts" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.463372 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-db-sync-config-data\") pod \"barbican-db-sync-97z49\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " pod="openstack/barbican-db-sync-97z49" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.471023 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.473274 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.477809 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.477988 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.480078 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.481187 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx7tx\" (UniqueName: \"kubernetes.io/projected/abbc90ac-2417-470d-84cf-ddc823f23e50-kube-api-access-xx7tx\") pod \"barbican-db-sync-97z49\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " pod="openstack/barbican-db-sync-97z49" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538255 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18942582-238b-4674-831a-d1a2284c912c-logs\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538300 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt2d5\" (UniqueName: \"kubernetes.io/projected/18942582-238b-4674-831a-d1a2284c912c-kube-api-access-bt2d5\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538318 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538355 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc 
kubenswrapper[4975]: I0122 10:55:48.538392 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538414 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpqx\" (UniqueName: \"kubernetes.io/projected/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-kube-api-access-tjpqx\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538437 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538478 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538499 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-config-data\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538522 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-combined-ca-bundle\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538552 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538574 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-scripts\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538600 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538617 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-logs\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538632 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538652 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538674 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-config\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538698 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.538739 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b2kj\" (UniqueName: \"kubernetes.io/projected/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-kube-api-access-9b2kj\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.540715 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.541321 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18942582-238b-4674-831a-d1a2284c912c-logs\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.542025 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-logs\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.543248 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.546535 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-config-data\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.547187 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.564097 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.564529 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.564801 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-combined-ca-bundle\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.566756 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.566973 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-scripts\") pod \"placement-db-sync-hr72x\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.574499 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjpqx\" (UniqueName: \"kubernetes.io/projected/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-kube-api-access-tjpqx\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.609949 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt2d5\" (UniqueName: \"kubernetes.io/projected/18942582-238b-4674-831a-d1a2284c912c-kube-api-access-bt2d5\") pod \"placement-db-sync-hr72x\" (UID: 
\"18942582-238b-4674-831a-d1a2284c912c\") " pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.613739 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.636115 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.643262 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.643359 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4hwx\" (UniqueName: \"kubernetes.io/projected/4de690d1-c33c-4eb1-bb34-f2702c0943df-kube-api-access-r4hwx\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644461 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b2kj\" (UniqueName: \"kubernetes.io/projected/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-kube-api-access-9b2kj\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644526 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644562 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644690 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644708 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644731 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644749 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644804 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-logs\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644839 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644862 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644885 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.644923 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-config\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.655846 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hr72x" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.669367 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.669717 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-config\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.670476 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.670758 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.675731 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b2kj\" (UniqueName: \"kubernetes.io/projected/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-kube-api-access-9b2kj\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.677681 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-v4scm\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.741533 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.745783 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.745844 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4hwx\" (UniqueName: \"kubernetes.io/projected/4de690d1-c33c-4eb1-bb34-f2702c0943df-kube-api-access-r4hwx\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.745890 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.745910 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.745964 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.745982 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.745997 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.746026 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-logs\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.746423 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-logs\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 
10:55:48.746959 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.747533 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.755979 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.762377 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.762667 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.779627 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-97z49" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.779702 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.783446 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4hwx\" (UniqueName: \"kubernetes.io/projected/4de690d1-c33c-4eb1-bb34-f2702c0943df-kube-api-access-r4hwx\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.817854 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.837819 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wd8ph"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.874240 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x24jb"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.977924 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nhtvn"] Jan 22 10:55:48 crc kubenswrapper[4975]: I0122 10:55:48.990233 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 10:55:49 crc kubenswrapper[4975]: I0122 10:55:49.040556 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cf7789c7-gvvgh"] Jan 22 10:55:49 crc kubenswrapper[4975]: I0122 10:55:49.047634 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:55:49 crc kubenswrapper[4975]: W0122 10:55:49.059483 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice/crio-48d4baefe552b0defdaed3653129918a76b3853e23c8fc1a098d7ec2f3f70582 WatchSource:0}: Error finding container 48d4baefe552b0defdaed3653129918a76b3853e23c8fc1a098d7ec2f3f70582: Status 404 returned error can't find the container with id 48d4baefe552b0defdaed3653129918a76b3853e23c8fc1a098d7ec2f3f70582 Jan 22 10:55:49 crc kubenswrapper[4975]: I0122 10:55:49.067635 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mkhfs"] Jan 22 10:55:49 crc kubenswrapper[4975]: I0122 10:55:49.115069 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.258347 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59bb6cb8b5-l9vts"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.270130 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhtvn" event={"ID":"d1484373-df23-4080-b6e8-9afc0ab6ec91","Type":"ContainerStarted","Data":"a32f2da6007acfc6594da7a7527f3ced6950bdb96d1b93bb33280ee29f9dec37"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.270164 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhtvn" event={"ID":"d1484373-df23-4080-b6e8-9afc0ab6ec91","Type":"ContainerStarted","Data":"2c000a2b7fd1eec0b0d376aa1726ef10d2facf71191c9cb4c9553358eb9de505"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.276826 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cf7789c7-gvvgh" event={"ID":"8d2af291-1e93-47b6-bff9-14bba4d9d553","Type":"ContainerStarted","Data":"273ecd4f29d3ac4a5594276f3518bc61efca9ad5faa3af0f9b5f2fd0690616d1"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.278297 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x24jb" event={"ID":"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f","Type":"ContainerStarted","Data":"b9142cac76f6745609a714d1e015b5c611624dc75b88fc27b0bbb6b42d0ce9bd"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.284863 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerStarted","Data":"48d4baefe552b0defdaed3653129918a76b3853e23c8fc1a098d7ec2f3f70582"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.290453 4975 generic.go:334] "Generic (PLEG): container finished" podID="495c93ec-ddc9-4538-9e76-0adfe2cb255d" containerID="ad31927a30a0c554ce7e1d4d75418b52b69817a23ba58605dbbd3268a8a51174" exitCode=0 Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.290619 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-wd8ph" event={"ID":"495c93ec-ddc9-4538-9e76-0adfe2cb255d","Type":"ContainerDied","Data":"ad31927a30a0c554ce7e1d4d75418b52b69817a23ba58605dbbd3268a8a51174"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.290645 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-wd8ph" event={"ID":"495c93ec-ddc9-4538-9e76-0adfe2cb255d","Type":"ContainerStarted","Data":"5e583d0eb0103e7a00488779936bba9a89f9cc52ff72e8e11b86ce856433fa52"} Jan 22 10:55:50 crc kubenswrapper[4975]: W0122 10:55:49.294400 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc09fa03d_3235_4032_bebb_2f4b21546f90.slice/crio-cb1ce549d9b5308c3cdc4d28114ea5ff7a2bad7b67c62f55ecac754160919855 WatchSource:0}: Error finding container cb1ce549d9b5308c3cdc4d28114ea5ff7a2bad7b67c62f55ecac754160919855: Status 404 returned error can't find the container with id cb1ce549d9b5308c3cdc4d28114ea5ff7a2bad7b67c62f55ecac754160919855 Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.294621 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nhtvn" podStartSLOduration=2.294340461 podStartE2EDuration="2.294340461s" podCreationTimestamp="2026-01-22 10:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:55:49.290179349 +0000 UTC m=+1141.948215459" watchObservedRunningTime="2026-01-22 10:55:49.294340461 +0000 UTC m=+1141.952376571" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.296183 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mkhfs" event={"ID":"590caacf-c377-4aed-abad-b927b28f51f5","Type":"ContainerStarted","Data":"cc0cb0e6faf6bf07454f0d829797f13ef86cf29b210a6a8f2152c767a1248089"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.330404 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hr72x"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.334325 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mkhfs" podStartSLOduration=2.33429362 podStartE2EDuration="2.33429362s" podCreationTimestamp="2026-01-22 10:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:55:49.321749995 +0000 UTC m=+1141.979786105" watchObservedRunningTime="2026-01-22 10:55:49.33429362 +0000 UTC m=+1141.992329730" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.826337 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.867433 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59bb6cb8b5-l9vts"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.945436 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.957490 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b986d67d5-jkmkh"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.958840 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:49.975300 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b986d67d5-jkmkh"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.075160 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a762d84-4b11-4b4b-91e1-51e43be6ff86-horizon-secret-key\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.075197 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-config-data\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.075236 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-scripts\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.077103 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a762d84-4b11-4b4b-91e1-51e43be6ff86-logs\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.077218 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f699\" (UniqueName: \"kubernetes.io/projected/2a762d84-4b11-4b4b-91e1-51e43be6ff86-kube-api-access-4f699\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.182171 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f699\" (UniqueName: \"kubernetes.io/projected/2a762d84-4b11-4b4b-91e1-51e43be6ff86-kube-api-access-4f699\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.182639 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a762d84-4b11-4b4b-91e1-51e43be6ff86-horizon-secret-key\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.182671 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-config-data\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.182738 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-scripts\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.182843 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a762d84-4b11-4b4b-91e1-51e43be6ff86-logs\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.183316 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a762d84-4b11-4b4b-91e1-51e43be6ff86-logs\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.184273 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-scripts\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.184797 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-config-data\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.191891 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a762d84-4b11-4b4b-91e1-51e43be6ff86-horizon-secret-key\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.203525 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f699\" (UniqueName: \"kubernetes.io/projected/2a762d84-4b11-4b4b-91e1-51e43be6ff86-kube-api-access-4f699\") pod \"horizon-b986d67d5-jkmkh\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.293712 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.353100 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.369506 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-97z49"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.388935 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4scm"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.391462 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59bb6cb8b5-l9vts" event={"ID":"c09fa03d-3235-4032-bebb-2f4b21546f90","Type":"ContainerStarted","Data":"cb1ce549d9b5308c3cdc4d28114ea5ff7a2bad7b67c62f55ecac754160919855"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.396710 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hr72x" event={"ID":"18942582-238b-4674-831a-d1a2284c912c","Type":"ContainerStarted","Data":"7166bcea79a914fc51d5696903e56ca980959d9c2079dfaa22f050c3e0bf4b58"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.404240 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mkhfs" event={"ID":"590caacf-c377-4aed-abad-b927b28f51f5","Type":"ContainerStarted","Data":"68c44397d85078ceb1f615b35b0f4489644aee354873ff41d24b5c9c40d7d2a8"} Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.455157 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.549853 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-wd8ph" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.659614 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:55:50 crc kubenswrapper[4975]: W0122 10:55:50.672442 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4de690d1_c33c_4eb1_bb34_f2702c0943df.slice/crio-2b0e85810dad825546bef09cf08b339b777a4446d66b9bfbdbb74b2a14acdbe3 WatchSource:0}: Error finding container 2b0e85810dad825546bef09cf08b339b777a4446d66b9bfbdbb74b2a14acdbe3: Status 404 returned error can't find the container with id 2b0e85810dad825546bef09cf08b339b777a4446d66b9bfbdbb74b2a14acdbe3 Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.709073 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9qnw\" (UniqueName: \"kubernetes.io/projected/495c93ec-ddc9-4538-9e76-0adfe2cb255d-kube-api-access-v9qnw\") pod \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.709134 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-nb\") pod \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.709223 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-swift-storage-0\") pod 
\"495c93ec-ddc9-4538-9e76-0adfe2cb255d\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.709244 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-config\") pod \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.709269 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-svc\") pod \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.709340 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-sb\") pod \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\" (UID: \"495c93ec-ddc9-4538-9e76-0adfe2cb255d\") " Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.722540 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495c93ec-ddc9-4538-9e76-0adfe2cb255d-kube-api-access-v9qnw" (OuterVolumeSpecName: "kube-api-access-v9qnw") pod "495c93ec-ddc9-4538-9e76-0adfe2cb255d" (UID: "495c93ec-ddc9-4538-9e76-0adfe2cb255d"). InnerVolumeSpecName "kube-api-access-v9qnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.737524 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "495c93ec-ddc9-4538-9e76-0adfe2cb255d" (UID: "495c93ec-ddc9-4538-9e76-0adfe2cb255d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.744828 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "495c93ec-ddc9-4538-9e76-0adfe2cb255d" (UID: "495c93ec-ddc9-4538-9e76-0adfe2cb255d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.753187 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "495c93ec-ddc9-4538-9e76-0adfe2cb255d" (UID: "495c93ec-ddc9-4538-9e76-0adfe2cb255d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.758709 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-config" (OuterVolumeSpecName: "config") pod "495c93ec-ddc9-4538-9e76-0adfe2cb255d" (UID: "495c93ec-ddc9-4538-9e76-0adfe2cb255d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.766870 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "495c93ec-ddc9-4538-9e76-0adfe2cb255d" (UID: "495c93ec-ddc9-4538-9e76-0adfe2cb255d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.811732 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9qnw\" (UniqueName: \"kubernetes.io/projected/495c93ec-ddc9-4538-9e76-0adfe2cb255d-kube-api-access-v9qnw\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.811764 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.811773 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.811781 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.811790 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.811798 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c93ec-ddc9-4538-9e76-0adfe2cb255d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:50 crc kubenswrapper[4975]: I0122 10:55:50.922115 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b986d67d5-jkmkh"] Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.416391 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b986d67d5-jkmkh" event={"ID":"2a762d84-4b11-4b4b-91e1-51e43be6ff86","Type":"ContainerStarted","Data":"b56162b1c92adaaf0b0fdd13e949146b5b7ebfd3a9f5d67d0664a4a71eff7b5d"} Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.417966 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-97z49" event={"ID":"abbc90ac-2417-470d-84cf-ddc823f23e50","Type":"ContainerStarted","Data":"c353cb2d71877a9d96b5161f420e4b76cc23ffe9166de5646e9c65bcee6a9421"} Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.419775 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4de690d1-c33c-4eb1-bb34-f2702c0943df","Type":"ContainerStarted","Data":"2b0e85810dad825546bef09cf08b339b777a4446d66b9bfbdbb74b2a14acdbe3"} Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.420912 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01a4284e-2a4a-44fe-9c6f-644b99abb5f1","Type":"ContainerStarted","Data":"a887d5a5dd576ffcf87690f64bd900f561882a6404800e1c2d804feffc271e44"} Jan 22 10:55:51 crc kubenswrapper[4975]: 
I0122 10:55:51.424866 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-wd8ph" Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.424883 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-wd8ph" event={"ID":"495c93ec-ddc9-4538-9e76-0adfe2cb255d","Type":"ContainerDied","Data":"5e583d0eb0103e7a00488779936bba9a89f9cc52ff72e8e11b86ce856433fa52"} Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.424947 4975 scope.go:117] "RemoveContainer" containerID="ad31927a30a0c554ce7e1d4d75418b52b69817a23ba58605dbbd3268a8a51174" Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.429524 4975 generic.go:334] "Generic (PLEG): container finished" podID="090fc6ba-2ecd-4dbd-8e2c-86112df32aef" containerID="bf733f359c0b2cf97aa4f5fde8d1157f7863448da61fa4471c124f7f404566a7" exitCode=0 Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.429584 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" event={"ID":"090fc6ba-2ecd-4dbd-8e2c-86112df32aef","Type":"ContainerDied","Data":"bf733f359c0b2cf97aa4f5fde8d1157f7863448da61fa4471c124f7f404566a7"} Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.429608 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" event={"ID":"090fc6ba-2ecd-4dbd-8e2c-86112df32aef","Type":"ContainerStarted","Data":"9e0ea78df671e36810356f60b5d6b014fa61dd78c9fbd1e26c9e3f554f2b4509"} Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.637429 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wd8ph"] Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.644159 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wd8ph"] Jan 22 10:55:51 crc kubenswrapper[4975]: I0122 10:55:51.767177 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495c93ec-ddc9-4538-9e76-0adfe2cb255d" path="/var/lib/kubelet/pods/495c93ec-ddc9-4538-9e76-0adfe2cb255d/volumes" Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.440109 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4de690d1-c33c-4eb1-bb34-f2702c0943df","Type":"ContainerStarted","Data":"5caadb3ef508515d029b236e550999b8d262caef72c83f73af1081c289d9d348"} Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.440408 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4de690d1-c33c-4eb1-bb34-f2702c0943df","Type":"ContainerStarted","Data":"fe281ef269f88cf346dee77c293eede88b9a4e6e82d03708bccc8778c529e6f6"} Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.440241 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerName="glance-httpd" containerID="cri-o://5caadb3ef508515d029b236e550999b8d262caef72c83f73af1081c289d9d348" gracePeriod=30 Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.440202 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerName="glance-log" containerID="cri-o://fe281ef269f88cf346dee77c293eede88b9a4e6e82d03708bccc8778c529e6f6" gracePeriod=30 Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.452392 4975 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01a4284e-2a4a-44fe-9c6f-644b99abb5f1","Type":"ContainerStarted","Data":"eb94da38f2d7f03ab1a057ab93aa2286bd8308fb0645db007a5dc0b980f81d2f"} Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.452436 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01a4284e-2a4a-44fe-9c6f-644b99abb5f1","Type":"ContainerStarted","Data":"cfc2c9d17bd2dfb21be47a27a979f7b464595dcbae25fd440b5c65a3e0adaea2"} Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.452566 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerName="glance-log" containerID="cri-o://cfc2c9d17bd2dfb21be47a27a979f7b464595dcbae25fd440b5c65a3e0adaea2" gracePeriod=30 Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.452809 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerName="glance-httpd" containerID="cri-o://eb94da38f2d7f03ab1a057ab93aa2286bd8308fb0645db007a5dc0b980f81d2f" gracePeriod=30 Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.465138 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.465120303 podStartE2EDuration="4.465120303s" podCreationTimestamp="2026-01-22 10:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:55:52.458536252 +0000 UTC m=+1145.116572372" watchObservedRunningTime="2026-01-22 10:55:52.465120303 +0000 UTC m=+1145.123156413" Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.465408 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" event={"ID":"090fc6ba-2ecd-4dbd-8e2c-86112df32aef","Type":"ContainerStarted","Data":"39981e88845908fcd9acf9f663a562581c50ff8fd349f9eee6e7ab9fea527b29"} Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.467495 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.493593 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.493573513 podStartE2EDuration="4.493573513s" podCreationTimestamp="2026-01-22 10:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:55:52.484631475 +0000 UTC m=+1145.142667605" watchObservedRunningTime="2026-01-22 10:55:52.493573513 +0000 UTC m=+1145.151609633" Jan 22 10:55:52 crc kubenswrapper[4975]: I0122 10:55:52.501683 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" podStartSLOduration=4.50166413 podStartE2EDuration="4.50166413s" podCreationTimestamp="2026-01-22 10:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:55:52.501562167 +0000 UTC m=+1145.159598277" watchObservedRunningTime="2026-01-22 10:55:52.50166413 +0000 UTC m=+1145.159700240" Jan 22 10:55:53 crc kubenswrapper[4975]: I0122 10:55:53.494204 4975 generic.go:334] "Generic 
(PLEG): container finished" podID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerID="5caadb3ef508515d029b236e550999b8d262caef72c83f73af1081c289d9d348" exitCode=143 Jan 22 10:55:53 crc kubenswrapper[4975]: I0122 10:55:53.494253 4975 generic.go:334] "Generic (PLEG): container finished" podID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerID="fe281ef269f88cf346dee77c293eede88b9a4e6e82d03708bccc8778c529e6f6" exitCode=143 Jan 22 10:55:53 crc kubenswrapper[4975]: I0122 10:55:53.494281 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4de690d1-c33c-4eb1-bb34-f2702c0943df","Type":"ContainerDied","Data":"5caadb3ef508515d029b236e550999b8d262caef72c83f73af1081c289d9d348"} Jan 22 10:55:53 crc kubenswrapper[4975]: I0122 10:55:53.494342 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4de690d1-c33c-4eb1-bb34-f2702c0943df","Type":"ContainerDied","Data":"fe281ef269f88cf346dee77c293eede88b9a4e6e82d03708bccc8778c529e6f6"} Jan 22 10:55:53 crc kubenswrapper[4975]: I0122 10:55:53.512122 4975 generic.go:334] "Generic (PLEG): container finished" podID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerID="eb94da38f2d7f03ab1a057ab93aa2286bd8308fb0645db007a5dc0b980f81d2f" exitCode=143 Jan 22 10:55:53 crc kubenswrapper[4975]: I0122 10:55:53.512174 4975 generic.go:334] "Generic (PLEG): container finished" podID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerID="cfc2c9d17bd2dfb21be47a27a979f7b464595dcbae25fd440b5c65a3e0adaea2" exitCode=143 Jan 22 10:55:53 crc kubenswrapper[4975]: I0122 10:55:53.512212 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01a4284e-2a4a-44fe-9c6f-644b99abb5f1","Type":"ContainerDied","Data":"eb94da38f2d7f03ab1a057ab93aa2286bd8308fb0645db007a5dc0b980f81d2f"} Jan 22 10:55:53 crc kubenswrapper[4975]: I0122 10:55:53.512291 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01a4284e-2a4a-44fe-9c6f-644b99abb5f1","Type":"ContainerDied","Data":"cfc2c9d17bd2dfb21be47a27a979f7b464595dcbae25fd440b5c65a3e0adaea2"} Jan 22 10:55:54 crc kubenswrapper[4975]: I0122 10:55:54.522397 4975 generic.go:334] "Generic (PLEG): container finished" podID="d1484373-df23-4080-b6e8-9afc0ab6ec91" containerID="a32f2da6007acfc6594da7a7527f3ced6950bdb96d1b93bb33280ee29f9dec37" exitCode=0 Jan 22 10:55:54 crc kubenswrapper[4975]: I0122 10:55:54.522441 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhtvn" event={"ID":"d1484373-df23-4080-b6e8-9afc0ab6ec91","Type":"ContainerDied","Data":"a32f2da6007acfc6594da7a7527f3ced6950bdb96d1b93bb33280ee29f9dec37"} Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.275420 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cf7789c7-gvvgh"] Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.323362 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bbdd45bdd-ckdgl"] Jan 22 10:55:56 crc kubenswrapper[4975]: E0122 10:55:56.323737 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495c93ec-ddc9-4538-9e76-0adfe2cb255d" containerName="init" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.323753 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="495c93ec-ddc9-4538-9e76-0adfe2cb255d" containerName="init" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.323890 4975 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="495c93ec-ddc9-4538-9e76-0adfe2cb255d" containerName="init" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.324809 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.328608 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.347535 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bbdd45bdd-ckdgl"] Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.382320 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b986d67d5-jkmkh"] Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.415710 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cb95f4cc6-jdv4m"] Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.417032 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.426530 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-config-data\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.426603 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-scripts\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.426641 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5d94b3-48d8-485a-b307-9a011d77ee76-logs\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.426657 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvdz\" (UniqueName: \"kubernetes.io/projected/fc5d94b3-48d8-485a-b307-9a011d77ee76-kube-api-access-hlvdz\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.426704 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-tls-certs\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.426726 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-combined-ca-bundle\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.426756 4975 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-secret-key\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.443963 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cb95f4cc6-jdv4m"] Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.527934 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-scripts\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.527996 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-config-data\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528041 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5d94b3-48d8-485a-b307-9a011d77ee76-logs\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528064 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlvdz\" (UniqueName: \"kubernetes.io/projected/fc5d94b3-48d8-485a-b307-9a011d77ee76-kube-api-access-hlvdz\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528127 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-tls-certs\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528158 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-combined-ca-bundle\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528186 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-scripts\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528209 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg5ss\" (UniqueName: \"kubernetes.io/projected/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-kube-api-access-kg5ss\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " 
pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528232 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-horizon-tls-certs\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528265 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-secret-key\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528343 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-logs\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528372 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-config-data\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528403 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-combined-ca-bundle\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.528430 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-horizon-secret-key\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.529235 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-scripts\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.529583 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5d94b3-48d8-485a-b307-9a011d77ee76-logs\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.531737 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-config-data\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.551317 4975 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-secret-key\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.551391 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-tls-certs\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.553775 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-combined-ca-bundle\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.555055 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlvdz\" (UniqueName: \"kubernetes.io/projected/fc5d94b3-48d8-485a-b307-9a011d77ee76-kube-api-access-hlvdz\") pod \"horizon-5bbdd45bdd-ckdgl\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.630039 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-scripts\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.630406 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg5ss\" (UniqueName: \"kubernetes.io/projected/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-kube-api-access-kg5ss\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.630429 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-horizon-tls-certs\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.630482 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-logs\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.630513 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-combined-ca-bundle\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.630532 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-horizon-secret-key\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.630563 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-config-data\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.631518 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-scripts\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.631887 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-config-data\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.632035 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-logs\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.634109 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-horizon-secret-key\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.635724 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-horizon-tls-certs\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.639019 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-combined-ca-bundle\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.650048 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg5ss\" (UniqueName: \"kubernetes.io/projected/d2027dc6-aa67-467c-ad8d-e31ccb6d66ee-kube-api-access-kg5ss\") pod \"horizon-7cb95f4cc6-jdv4m\" (UID: \"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee\") " pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.652660 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:55:56 crc kubenswrapper[4975]: I0122 10:55:56.744160 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.436461 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.436545 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.664013 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nhtvn" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.669371 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.756024 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-config-data\") pod \"d1484373-df23-4080-b6e8-9afc0ab6ec91\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.756092 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-combined-ca-bundle\") pod \"d1484373-df23-4080-b6e8-9afc0ab6ec91\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.756151 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-scripts\") pod \"d1484373-df23-4080-b6e8-9afc0ab6ec91\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.756240 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-credential-keys\") pod \"d1484373-df23-4080-b6e8-9afc0ab6ec91\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.756296 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-fernet-keys\") pod \"d1484373-df23-4080-b6e8-9afc0ab6ec91\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.756457 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhjgg\" (UniqueName: \"kubernetes.io/projected/d1484373-df23-4080-b6e8-9afc0ab6ec91-kube-api-access-dhjgg\") pod \"d1484373-df23-4080-b6e8-9afc0ab6ec91\" (UID: \"d1484373-df23-4080-b6e8-9afc0ab6ec91\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.761778 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1484373-df23-4080-b6e8-9afc0ab6ec91-kube-api-access-dhjgg" (OuterVolumeSpecName: 
"kube-api-access-dhjgg") pod "d1484373-df23-4080-b6e8-9afc0ab6ec91" (UID: "d1484373-df23-4080-b6e8-9afc0ab6ec91"). InnerVolumeSpecName "kube-api-access-dhjgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.762557 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d1484373-df23-4080-b6e8-9afc0ab6ec91" (UID: "d1484373-df23-4080-b6e8-9afc0ab6ec91"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.762771 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-scripts" (OuterVolumeSpecName: "scripts") pod "d1484373-df23-4080-b6e8-9afc0ab6ec91" (UID: "d1484373-df23-4080-b6e8-9afc0ab6ec91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.776866 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d1484373-df23-4080-b6e8-9afc0ab6ec91" (UID: "d1484373-df23-4080-b6e8-9afc0ab6ec91"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.795732 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-config-data" (OuterVolumeSpecName: "config-data") pod "d1484373-df23-4080-b6e8-9afc0ab6ec91" (UID: "d1484373-df23-4080-b6e8-9afc0ab6ec91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.806660 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1484373-df23-4080-b6e8-9afc0ab6ec91" (UID: "d1484373-df23-4080-b6e8-9afc0ab6ec91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.858942 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4hwx\" (UniqueName: \"kubernetes.io/projected/4de690d1-c33c-4eb1-bb34-f2702c0943df-kube-api-access-r4hwx\") pod \"4de690d1-c33c-4eb1-bb34-f2702c0943df\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859113 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-logs\") pod \"4de690d1-c33c-4eb1-bb34-f2702c0943df\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859156 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-config-data\") pod \"4de690d1-c33c-4eb1-bb34-f2702c0943df\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859201 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-httpd-run\") pod \"4de690d1-c33c-4eb1-bb34-f2702c0943df\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859263 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-scripts\") pod \"4de690d1-c33c-4eb1-bb34-f2702c0943df\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859298 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-combined-ca-bundle\") pod \"4de690d1-c33c-4eb1-bb34-f2702c0943df\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859448 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-internal-tls-certs\") pod \"4de690d1-c33c-4eb1-bb34-f2702c0943df\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859480 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"4de690d1-c33c-4eb1-bb34-f2702c0943df\" (UID: \"4de690d1-c33c-4eb1-bb34-f2702c0943df\") " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859791 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4de690d1-c33c-4eb1-bb34-f2702c0943df" (UID: "4de690d1-c33c-4eb1-bb34-f2702c0943df"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859923 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859946 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859959 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859974 4975 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859986 4975 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.859997 4975 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1484373-df23-4080-b6e8-9afc0ab6ec91-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.860010 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhjgg\" (UniqueName: \"kubernetes.io/projected/d1484373-df23-4080-b6e8-9afc0ab6ec91-kube-api-access-dhjgg\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.860145 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-logs" (OuterVolumeSpecName: "logs") pod "4de690d1-c33c-4eb1-bb34-f2702c0943df" (UID: "4de690d1-c33c-4eb1-bb34-f2702c0943df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.867697 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de690d1-c33c-4eb1-bb34-f2702c0943df-kube-api-access-r4hwx" (OuterVolumeSpecName: "kube-api-access-r4hwx") pod "4de690d1-c33c-4eb1-bb34-f2702c0943df" (UID: "4de690d1-c33c-4eb1-bb34-f2702c0943df"). InnerVolumeSpecName "kube-api-access-r4hwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.867973 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-scripts" (OuterVolumeSpecName: "scripts") pod "4de690d1-c33c-4eb1-bb34-f2702c0943df" (UID: "4de690d1-c33c-4eb1-bb34-f2702c0943df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.872517 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "4de690d1-c33c-4eb1-bb34-f2702c0943df" (UID: "4de690d1-c33c-4eb1-bb34-f2702c0943df"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.887851 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4de690d1-c33c-4eb1-bb34-f2702c0943df" (UID: "4de690d1-c33c-4eb1-bb34-f2702c0943df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.929972 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4de690d1-c33c-4eb1-bb34-f2702c0943df" (UID: "4de690d1-c33c-4eb1-bb34-f2702c0943df"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.935803 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-config-data" (OuterVolumeSpecName: "config-data") pod "4de690d1-c33c-4eb1-bb34-f2702c0943df" (UID: "4de690d1-c33c-4eb1-bb34-f2702c0943df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.962429 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.962725 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.962792 4975 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.962965 4975 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.963109 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4hwx\" (UniqueName: \"kubernetes.io/projected/4de690d1-c33c-4eb1-bb34-f2702c0943df-kube-api-access-r4hwx\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.963180 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de690d1-c33c-4eb1-bb34-f2702c0943df-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.963241 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de690d1-c33c-4eb1-bb34-f2702c0943df-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:57 crc kubenswrapper[4975]: I0122 10:55:57.985787 4975 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.065245 4975 reconciler_common.go:293] "Volume 
detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.590637 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4de690d1-c33c-4eb1-bb34-f2702c0943df","Type":"ContainerDied","Data":"2b0e85810dad825546bef09cf08b339b777a4446d66b9bfbdbb74b2a14acdbe3"} Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.590687 4975 scope.go:117] "RemoveContainer" containerID="5caadb3ef508515d029b236e550999b8d262caef72c83f73af1081c289d9d348" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.590699 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.593822 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhtvn" event={"ID":"d1484373-df23-4080-b6e8-9afc0ab6ec91","Type":"ContainerDied","Data":"2c000a2b7fd1eec0b0d376aa1726ef10d2facf71191c9cb4c9553358eb9de505"} Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.593846 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c000a2b7fd1eec0b0d376aa1726ef10d2facf71191c9cb4c9553358eb9de505" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.593918 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nhtvn" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.645272 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.661494 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.666769 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:55:58 crc kubenswrapper[4975]: E0122 10:55:58.667108 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerName="glance-log" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.667124 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerName="glance-log" Jan 22 10:55:58 crc kubenswrapper[4975]: E0122 10:55:58.667134 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerName="glance-httpd" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.667141 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerName="glance-httpd" Jan 22 10:55:58 crc kubenswrapper[4975]: E0122 10:55:58.667162 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1484373-df23-4080-b6e8-9afc0ab6ec91" containerName="keystone-bootstrap" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.667168 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1484373-df23-4080-b6e8-9afc0ab6ec91" containerName="keystone-bootstrap" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.667315 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerName="glance-httpd" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.667358 4975 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4de690d1-c33c-4eb1-bb34-f2702c0943df" containerName="glance-log" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.667368 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1484373-df23-4080-b6e8-9afc0ab6ec91" containerName="keystone-bootstrap" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.668162 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.670472 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.672494 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.685905 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.746548 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.757423 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nhtvn"] Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.764273 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nhtvn"] Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.781183 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.781243 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.781277 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.781306 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9fzv\" (UniqueName: \"kubernetes.io/projected/465645fc-a907-44d6-8d2b-30e29b40187b-kube-api-access-k9fzv\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.781345 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: 
I0122 10:55:58.781367 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-logs\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.781397 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.781439 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.809145 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zl6sk"] Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.809421 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="dnsmasq-dns" containerID="cri-o://751f5849adc66c64e151d777eb035ac1de0e1d14f0c21f2523604318d4638867" gracePeriod=10 Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.855271 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n7gh8"] Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.856230 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.861440 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.861759 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dr2s2" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.862108 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.862247 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.862374 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.874557 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n7gh8"] Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.882554 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.882609 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.882657 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9fzv\" (UniqueName: \"kubernetes.io/projected/465645fc-a907-44d6-8d2b-30e29b40187b-kube-api-access-k9fzv\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.882692 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.882719 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-logs\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.882751 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.882788 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.883213 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-logs\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.883441 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.883876 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.884112 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.887569 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.888084 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.888559 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.907056 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.913748 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.914478 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9fzv\" (UniqueName: \"kubernetes.io/projected/465645fc-a907-44d6-8d2b-30e29b40187b-kube-api-access-k9fzv\") pod \"glance-default-internal-api-0\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.986166 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk42s\" (UniqueName: \"kubernetes.io/projected/7594ce51-7b3b-42e1-a47c-791edf907087-kube-api-access-lk42s\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.986222 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-config-data\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.986271 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-fernet-keys\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.986491 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-credential-keys\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.986651 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-combined-ca-bundle\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:58 crc kubenswrapper[4975]: I0122 10:55:58.986705 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-scripts\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.004189 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.088846 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-credential-keys\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.089098 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-combined-ca-bundle\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.089123 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-scripts\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.089175 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk42s\" (UniqueName: \"kubernetes.io/projected/7594ce51-7b3b-42e1-a47c-791edf907087-kube-api-access-lk42s\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.089204 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-config-data\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.089249 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-fernet-keys\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.093712 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-scripts\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.094077 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-combined-ca-bundle\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.094091 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-fernet-keys\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.095807 4975 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-credential-keys\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.100572 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-config-data\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.111866 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk42s\" (UniqueName: \"kubernetes.io/projected/7594ce51-7b3b-42e1-a47c-791edf907087-kube-api-access-lk42s\") pod \"keystone-bootstrap-n7gh8\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.183368 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.622817 4975 generic.go:334] "Generic (PLEG): container finished" podID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerID="751f5849adc66c64e151d777eb035ac1de0e1d14f0c21f2523604318d4638867" exitCode=0 Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.622866 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" event={"ID":"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1","Type":"ContainerDied","Data":"751f5849adc66c64e151d777eb035ac1de0e1d14f0c21f2523604318d4638867"} Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.767031 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de690d1-c33c-4eb1-bb34-f2702c0943df" path="/var/lib/kubelet/pods/4de690d1-c33c-4eb1-bb34-f2702c0943df/volumes" Jan 22 10:55:59 crc kubenswrapper[4975]: I0122 10:55:59.767689 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1484373-df23-4080-b6e8-9afc0ab6ec91" path="/var/lib/kubelet/pods/d1484373-df23-4080-b6e8-9afc0ab6ec91/volumes" Jan 22 10:56:02 crc kubenswrapper[4975]: I0122 10:56:02.709687 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 22 10:56:03 crc kubenswrapper[4975]: E0122 10:56:03.427452 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 22 10:56:03 crc kubenswrapper[4975]: E0122 10:56:03.428116 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n656h5cdh6ch6h588h5d9h569h56fh695hdbh97h645h5c7h5fch66bhdfh657h97h5d9h5bh5dh575h547h6bh596h668h66h68bh67dh55dh555h77q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgz66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-59bb6cb8b5-l9vts_openstack(c09fa03d-3235-4032-bebb-2f4b21546f90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:56:03 crc kubenswrapper[4975]: E0122 10:56:03.452041 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-59bb6cb8b5-l9vts" podUID="c09fa03d-3235-4032-bebb-2f4b21546f90" Jan 22 10:56:07 crc kubenswrapper[4975]: E0122 10:56:07.672345 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 22 10:56:07 crc kubenswrapper[4975]: E0122 10:56:07.672958 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bt2d5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-hr72x_openstack(18942582-238b-4674-831a-d1a2284c912c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:56:07 crc kubenswrapper[4975]: E0122 10:56:07.674231 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-hr72x" podUID="18942582-238b-4674-831a-d1a2284c912c" Jan 22 10:56:07 crc kubenswrapper[4975]: E0122 10:56:07.690423 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-hr72x" podUID="18942582-238b-4674-831a-d1a2284c912c" Jan 22 10:56:07 crc kubenswrapper[4975]: E0122 10:56:07.702389 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 22 10:56:07 crc kubenswrapper[4975]: E0122 10:56:07.702522 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n69h675h59chbh5ffh5dfh648h66fhdfh678h9dhfdh56ch5cch684h684h57dh85hb5h5c6h5c8h599hd7h5cdhbch64h64h67dhd6h58h5b8h7cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmblj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-cf7789c7-gvvgh_openstack(8d2af291-1e93-47b6-bff9-14bba4d9d553): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:56:07 crc kubenswrapper[4975]: E0122 10:56:07.704669 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-cf7789c7-gvvgh" podUID="8d2af291-1e93-47b6-bff9-14bba4d9d553" Jan 22 10:56:07 crc kubenswrapper[4975]: I0122 10:56:07.710285 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 22 10:56:12 crc kubenswrapper[4975]: I0122 10:56:12.709002 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 22 10:56:12 crc kubenswrapper[4975]: I0122 10:56:12.709713 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:56:17 crc kubenswrapper[4975]: I0122 10:56:17.708980 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 22 10:56:18 crc kubenswrapper[4975]: I0122 10:56:18.643672 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:56:18 crc kubenswrapper[4975]: I0122 10:56:18.644419 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.001784 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59bb6cb8b5-l9vts" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.027293 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-config-data\") pod \"c09fa03d-3235-4032-bebb-2f4b21546f90\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.027438 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09fa03d-3235-4032-bebb-2f4b21546f90-horizon-secret-key\") pod \"c09fa03d-3235-4032-bebb-2f4b21546f90\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.027580 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09fa03d-3235-4032-bebb-2f4b21546f90-logs\") pod \"c09fa03d-3235-4032-bebb-2f4b21546f90\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.027647 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgz66\" (UniqueName: \"kubernetes.io/projected/c09fa03d-3235-4032-bebb-2f4b21546f90-kube-api-access-vgz66\") pod \"c09fa03d-3235-4032-bebb-2f4b21546f90\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.027687 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-scripts\") pod \"c09fa03d-3235-4032-bebb-2f4b21546f90\" (UID: \"c09fa03d-3235-4032-bebb-2f4b21546f90\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.029052 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-scripts" (OuterVolumeSpecName: "scripts") pod "c09fa03d-3235-4032-bebb-2f4b21546f90" (UID: "c09fa03d-3235-4032-bebb-2f4b21546f90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.029486 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c09fa03d-3235-4032-bebb-2f4b21546f90-logs" (OuterVolumeSpecName: "logs") pod "c09fa03d-3235-4032-bebb-2f4b21546f90" (UID: "c09fa03d-3235-4032-bebb-2f4b21546f90"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.032332 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-config-data" (OuterVolumeSpecName: "config-data") pod "c09fa03d-3235-4032-bebb-2f4b21546f90" (UID: "c09fa03d-3235-4032-bebb-2f4b21546f90"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.034853 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09fa03d-3235-4032-bebb-2f4b21546f90-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c09fa03d-3235-4032-bebb-2f4b21546f90" (UID: "c09fa03d-3235-4032-bebb-2f4b21546f90"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.035118 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09fa03d-3235-4032-bebb-2f4b21546f90-kube-api-access-vgz66" (OuterVolumeSpecName: "kube-api-access-vgz66") pod "c09fa03d-3235-4032-bebb-2f4b21546f90" (UID: "c09fa03d-3235-4032-bebb-2f4b21546f90"). InnerVolumeSpecName "kube-api-access-vgz66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.132639 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.132970 4975 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c09fa03d-3235-4032-bebb-2f4b21546f90-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.132988 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09fa03d-3235-4032-bebb-2f4b21546f90-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.132997 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgz66\" (UniqueName: \"kubernetes.io/projected/c09fa03d-3235-4032-bebb-2f4b21546f90-kube-api-access-vgz66\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.133009 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c09fa03d-3235-4032-bebb-2f4b21546f90-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: E0122 10:56:22.622184 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 22 10:56:22 crc kubenswrapper[4975]: E0122 10:56:22.622744 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xx7tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-97z49_openstack(abbc90ac-2417-470d-84cf-ddc823f23e50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:56:22 crc kubenswrapper[4975]: E0122 10:56:22.623994 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-97z49" podUID="abbc90ac-2417-470d-84cf-ddc823f23e50" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.682525 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf7789c7-gvvgh" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.692136 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.747606 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-scripts\") pod \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.747722 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.747776 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-scripts\") pod \"8d2af291-1e93-47b6-bff9-14bba4d9d553\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.747820 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-public-tls-certs\") pod \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.747854 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-combined-ca-bundle\") pod \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.747897 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-config-data\") pod \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.747960 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjpqx\" (UniqueName: \"kubernetes.io/projected/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-kube-api-access-tjpqx\") pod \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.748041 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmblj\" (UniqueName: \"kubernetes.io/projected/8d2af291-1e93-47b6-bff9-14bba4d9d553-kube-api-access-rmblj\") pod \"8d2af291-1e93-47b6-bff9-14bba4d9d553\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.748078 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-logs\") pod \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.748124 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2af291-1e93-47b6-bff9-14bba4d9d553-logs\") pod \"8d2af291-1e93-47b6-bff9-14bba4d9d553\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " Jan 22 10:56:22 crc 
kubenswrapper[4975]: I0122 10:56:22.748265 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-httpd-run\") pod \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\" (UID: \"01a4284e-2a4a-44fe-9c6f-644b99abb5f1\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.748466 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-config-data\") pod \"8d2af291-1e93-47b6-bff9-14bba4d9d553\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.748900 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d2af291-1e93-47b6-bff9-14bba4d9d553-logs" (OuterVolumeSpecName: "logs") pod "8d2af291-1e93-47b6-bff9-14bba4d9d553" (UID: "8d2af291-1e93-47b6-bff9-14bba4d9d553"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.749048 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d2af291-1e93-47b6-bff9-14bba4d9d553-horizon-secret-key\") pod \"8d2af291-1e93-47b6-bff9-14bba4d9d553\" (UID: \"8d2af291-1e93-47b6-bff9-14bba4d9d553\") " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.749142 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-logs" (OuterVolumeSpecName: "logs") pod "01a4284e-2a4a-44fe-9c6f-644b99abb5f1" (UID: "01a4284e-2a4a-44fe-9c6f-644b99abb5f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.749755 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2af291-1e93-47b6-bff9-14bba4d9d553-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.749794 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.751834 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "01a4284e-2a4a-44fe-9c6f-644b99abb5f1" (UID: "01a4284e-2a4a-44fe-9c6f-644b99abb5f1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.751971 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-scripts" (OuterVolumeSpecName: "scripts") pod "8d2af291-1e93-47b6-bff9-14bba4d9d553" (UID: "8d2af291-1e93-47b6-bff9-14bba4d9d553"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.752056 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-config-data" (OuterVolumeSpecName: "config-data") pod "8d2af291-1e93-47b6-bff9-14bba4d9d553" (UID: "8d2af291-1e93-47b6-bff9-14bba4d9d553"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.753759 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-scripts" (OuterVolumeSpecName: "scripts") pod "01a4284e-2a4a-44fe-9c6f-644b99abb5f1" (UID: "01a4284e-2a4a-44fe-9c6f-644b99abb5f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.768005 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-kube-api-access-tjpqx" (OuterVolumeSpecName: "kube-api-access-tjpqx") pod "01a4284e-2a4a-44fe-9c6f-644b99abb5f1" (UID: "01a4284e-2a4a-44fe-9c6f-644b99abb5f1"). InnerVolumeSpecName "kube-api-access-tjpqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.771233 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2af291-1e93-47b6-bff9-14bba4d9d553-kube-api-access-rmblj" (OuterVolumeSpecName: "kube-api-access-rmblj") pod "8d2af291-1e93-47b6-bff9-14bba4d9d553" (UID: "8d2af291-1e93-47b6-bff9-14bba4d9d553"). InnerVolumeSpecName "kube-api-access-rmblj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.771297 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "01a4284e-2a4a-44fe-9c6f-644b99abb5f1" (UID: "01a4284e-2a4a-44fe-9c6f-644b99abb5f1"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.771497 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2af291-1e93-47b6-bff9-14bba4d9d553-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8d2af291-1e93-47b6-bff9-14bba4d9d553" (UID: "8d2af291-1e93-47b6-bff9-14bba4d9d553"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.798886 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01a4284e-2a4a-44fe-9c6f-644b99abb5f1" (UID: "01a4284e-2a4a-44fe-9c6f-644b99abb5f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.819627 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-config-data" (OuterVolumeSpecName: "config-data") pod "01a4284e-2a4a-44fe-9c6f-644b99abb5f1" (UID: "01a4284e-2a4a-44fe-9c6f-644b99abb5f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.843189 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "01a4284e-2a4a-44fe-9c6f-644b99abb5f1" (UID: "01a4284e-2a4a-44fe-9c6f-644b99abb5f1"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.847758 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.847908 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01a4284e-2a4a-44fe-9c6f-644b99abb5f1","Type":"ContainerDied","Data":"a887d5a5dd576ffcf87690f64bd900f561882a6404800e1c2d804feffc271e44"} Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852227 4975 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852348 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852460 4975 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d2af291-1e93-47b6-bff9-14bba4d9d553-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852532 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852611 4975 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852686 4975 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852763 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2af291-1e93-47b6-bff9-14bba4d9d553-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852831 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852957 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.853024 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjpqx\" (UniqueName: \"kubernetes.io/projected/01a4284e-2a4a-44fe-9c6f-644b99abb5f1-kube-api-access-tjpqx\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.853077 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmblj\" (UniqueName: \"kubernetes.io/projected/8d2af291-1e93-47b6-bff9-14bba4d9d553-kube-api-access-rmblj\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: 
I0122 10:56:22.852642 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59bb6cb8b5-l9vts" event={"ID":"c09fa03d-3235-4032-bebb-2f4b21546f90","Type":"ContainerDied","Data":"cb1ce549d9b5308c3cdc4d28114ea5ff7a2bad7b67c62f55ecac754160919855"} Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.852661 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59bb6cb8b5-l9vts" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.854348 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf7789c7-gvvgh" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.854377 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cf7789c7-gvvgh" event={"ID":"8d2af291-1e93-47b6-bff9-14bba4d9d553","Type":"ContainerDied","Data":"273ecd4f29d3ac4a5594276f3518bc61efca9ad5faa3af0f9b5f2fd0690616d1"} Jan 22 10:56:22 crc kubenswrapper[4975]: E0122 10:56:22.855326 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-97z49" podUID="abbc90ac-2417-470d-84cf-ddc823f23e50" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.885303 4975 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.953883 4975 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.982115 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59bb6cb8b5-l9vts"] Jan 22 10:56:22 crc kubenswrapper[4975]: I0122 10:56:22.988792 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59bb6cb8b5-l9vts"] Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.024022 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cf7789c7-gvvgh"] Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.034229 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cf7789c7-gvvgh"] Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.042177 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.050250 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.068944 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:56:23 crc kubenswrapper[4975]: E0122 10:56:23.069304 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerName="glance-httpd" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.069321 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerName="glance-httpd" Jan 22 10:56:23 crc kubenswrapper[4975]: E0122 10:56:23.069335 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerName="glance-log" Jan 22 10:56:23 crc 
kubenswrapper[4975]: I0122 10:56:23.069342 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerName="glance-log" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.069521 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerName="glance-log" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.069549 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" containerName="glance-httpd" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.070453 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.073539 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.073766 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.076783 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.157053 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.157110 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/c2656030-bade-452d-a4cb-ba4c12935750-kube-api-access-76fj8\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.157141 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.157185 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-logs\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.157401 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.157666 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-scripts\") 
pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.157762 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-config-data\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.157855 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259098 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-scripts\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259152 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-config-data\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259197 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259226 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259267 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/c2656030-bade-452d-a4cb-ba4c12935750-kube-api-access-76fj8\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259293 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259348 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-logs\") pod \"glance-default-external-api-0\" (UID: 
\"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259434 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259479 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.259907 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-logs\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.260051 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.270581 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.270804 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-config-data\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.276427 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-scripts\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.277617 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.279326 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/c2656030-bade-452d-a4cb-ba4c12935750-kube-api-access-76fj8\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 
crc kubenswrapper[4975]: I0122 10:56:23.290287 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.387647 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.764578 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a4284e-2a4a-44fe-9c6f-644b99abb5f1" path="/var/lib/kubelet/pods/01a4284e-2a4a-44fe-9c6f-644b99abb5f1/volumes" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.765778 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d2af291-1e93-47b6-bff9-14bba4d9d553" path="/var/lib/kubelet/pods/8d2af291-1e93-47b6-bff9-14bba4d9d553/volumes" Jan 22 10:56:23 crc kubenswrapper[4975]: I0122 10:56:23.766232 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09fa03d-3235-4032-bebb-2f4b21546f90" path="/var/lib/kubelet/pods/c09fa03d-3235-4032-bebb-2f4b21546f90/volumes" Jan 22 10:56:24 crc kubenswrapper[4975]: E0122 10:56:24.088926 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 22 10:56:24 crc kubenswrapper[4975]: E0122 10:56:24.089086 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnl69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-x24jb_openstack(f58e68a6-dee2-49a4-be2d-e6b1ca69b19f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:56:24 crc kubenswrapper[4975]: E0122 10:56:24.090350 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-x24jb" podUID="f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.192437 4975 scope.go:117] "RemoveContainer" containerID="fe281ef269f88cf346dee77c293eede88b9a4e6e82d03708bccc8778c529e6f6" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.550260 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.552299 4975 scope.go:117] "RemoveContainer" containerID="eb94da38f2d7f03ab1a057ab93aa2286bd8308fb0645db007a5dc0b980f81d2f" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.598694 4975 scope.go:117] "RemoveContainer" containerID="cfc2c9d17bd2dfb21be47a27a979f7b464595dcbae25fd440b5c65a3e0adaea2" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.692445 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-svc\") pod \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.692574 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-sb\") pod \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.692632 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-swift-storage-0\") pod \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.692695 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4zgp\" (UniqueName: \"kubernetes.io/projected/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-kube-api-access-n4zgp\") pod \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.692763 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-nb\") pod \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.692815 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-config\") pod \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\" (UID: \"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1\") " Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.701918 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-kube-api-access-n4zgp" (OuterVolumeSpecName: "kube-api-access-n4zgp") pod "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" (UID: "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1"). InnerVolumeSpecName "kube-api-access-n4zgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.737879 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bbdd45bdd-ckdgl"] Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.743290 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" (UID: "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.750993 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" (UID: "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.755892 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" (UID: "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.763205 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" (UID: "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.776586 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-config" (OuterVolumeSpecName: "config") pod "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" (UID: "23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.794551 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.794578 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4zgp\" (UniqueName: \"kubernetes.io/projected/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-kube-api-access-n4zgp\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.794588 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.794597 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.794636 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.794645 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.866814 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-7cb95f4cc6-jdv4m"] Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.872228 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbdd45bdd-ckdgl" event={"ID":"fc5d94b3-48d8-485a-b307-9a011d77ee76","Type":"ContainerStarted","Data":"7d705752375c02cce9a9eaa6ac773f1e737ce3d67c65850e8a3af5d224339ec5"} Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.873233 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b986d67d5-jkmkh" event={"ID":"2a762d84-4b11-4b4b-91e1-51e43be6ff86","Type":"ContainerStarted","Data":"2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7"} Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.876967 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.876978 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" event={"ID":"23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1","Type":"ContainerDied","Data":"2fd71df3d83069b22058a1c8d4452517802f7e0d52e054d4fbe5ba9f54e59688"} Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.877065 4975 scope.go:117] "RemoveContainer" containerID="751f5849adc66c64e151d777eb035ac1de0e1d14f0c21f2523604318d4638867" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.881125 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerStarted","Data":"dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85"} Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.883025 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hr72x" event={"ID":"18942582-238b-4674-831a-d1a2284c912c","Type":"ContainerStarted","Data":"7998680a9cd7cf7a32681593119d7938b68da8ed5d638ba92eb6f9105c0b83cc"} Jan 22 10:56:24 crc kubenswrapper[4975]: E0122 10:56:24.884273 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-x24jb" podUID="f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" Jan 22 10:56:24 crc kubenswrapper[4975]: W0122 10:56:24.889695 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2027dc6_aa67_467c_ad8d_e31ccb6d66ee.slice/crio-d675e7f1c061d5886eb9e211bcb36e7b73baf7fc472150203029b2bb7c8f072b WatchSource:0}: Error finding container d675e7f1c061d5886eb9e211bcb36e7b73baf7fc472150203029b2bb7c8f072b: Status 404 returned error can't find the container with id d675e7f1c061d5886eb9e211bcb36e7b73baf7fc472150203029b2bb7c8f072b Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.901780 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hr72x" podStartSLOduration=1.9900097140000002 podStartE2EDuration="36.901757692s" podCreationTimestamp="2026-01-22 10:55:48 +0000 UTC" firstStartedPulling="2026-01-22 10:55:49.350453292 +0000 UTC m=+1142.008489402" lastFinishedPulling="2026-01-22 10:56:24.26220126 +0000 UTC m=+1176.920237380" observedRunningTime="2026-01-22 10:56:24.898009411 +0000 UTC m=+1177.556045521" watchObservedRunningTime="2026-01-22 10:56:24.901757692 +0000 UTC m=+1177.559793802" Jan 22 10:56:24 crc kubenswrapper[4975]: 
I0122 10:56:24.904259 4975 scope.go:117] "RemoveContainer" containerID="79521bd816a20ac3d6fb126f93f3a04b9bbb1c8b538d914765fe3fd8b3363495" Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.954413 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zl6sk"] Jan 22 10:56:24 crc kubenswrapper[4975]: I0122 10:56:24.970129 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zl6sk"] Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.057184 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n7gh8"] Jan 22 10:56:25 crc kubenswrapper[4975]: W0122 10:56:25.072506 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7594ce51_7b3b_42e1_a47c_791edf907087.slice/crio-6a6e629dd51c31a290044087f841969fdd3df503b3eb81e7f999d6b04a6aa164 WatchSource:0}: Error finding container 6a6e629dd51c31a290044087f841969fdd3df503b3eb81e7f999d6b04a6aa164: Status 404 returned error can't find the container with id 6a6e629dd51c31a290044087f841969fdd3df503b3eb81e7f999d6b04a6aa164 Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.115998 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.765448 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" path="/var/lib/kubelet/pods/23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1/volumes" Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.896258 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n7gh8" event={"ID":"7594ce51-7b3b-42e1-a47c-791edf907087","Type":"ContainerStarted","Data":"9b57a69c63f587768367e76faa3583720078a51a532debd6dacd7cc730572cb7"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.896313 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n7gh8" event={"ID":"7594ce51-7b3b-42e1-a47c-791edf907087","Type":"ContainerStarted","Data":"6a6e629dd51c31a290044087f841969fdd3df503b3eb81e7f999d6b04a6aa164"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.903165 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465645fc-a907-44d6-8d2b-30e29b40187b","Type":"ContainerStarted","Data":"b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.903202 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465645fc-a907-44d6-8d2b-30e29b40187b","Type":"ContainerStarted","Data":"4b8a29cca6111a762f9c729a5939124f82c8dc23ba7afa02d09cd6de7ea411d3"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.917780 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n7gh8" podStartSLOduration=27.917762669 podStartE2EDuration="27.917762669s" podCreationTimestamp="2026-01-22 10:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:25.916301215 +0000 UTC m=+1178.574337325" watchObservedRunningTime="2026-01-22 10:56:25.917762669 +0000 UTC m=+1178.575798779" Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.930162 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-b986d67d5-jkmkh" event={"ID":"2a762d84-4b11-4b4b-91e1-51e43be6ff86","Type":"ContainerStarted","Data":"c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.930426 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b986d67d5-jkmkh" podUID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerName="horizon-log" containerID="cri-o://2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7" gracePeriod=30 Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.931870 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b986d67d5-jkmkh" podUID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerName="horizon" containerID="cri-o://c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca" gracePeriod=30 Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.940599 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbdd45bdd-ckdgl" event={"ID":"fc5d94b3-48d8-485a-b307-9a011d77ee76","Type":"ContainerStarted","Data":"e133221d6ac1001025ac6a494021a555d0ffd31b1f0f89a766296818d86b5169"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.940650 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbdd45bdd-ckdgl" event={"ID":"fc5d94b3-48d8-485a-b307-9a011d77ee76","Type":"ContainerStarted","Data":"39b1484aa175e8832a4a3a02dba23da796deff516ab271d29c22c353de6d296d"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.943643 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cb95f4cc6-jdv4m" event={"ID":"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee","Type":"ContainerStarted","Data":"f9b7b842b9a1fcbacc676c80b7c50fbf7b7adfc180c7ac21bfbc6da00ffd2fbe"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.943679 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cb95f4cc6-jdv4m" event={"ID":"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee","Type":"ContainerStarted","Data":"51749b6aed12e4e01ce685b055ffc9b8c63a4cbab89f826e362e8865273a7c83"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.943691 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cb95f4cc6-jdv4m" event={"ID":"d2027dc6-aa67-467c-ad8d-e31ccb6d66ee","Type":"ContainerStarted","Data":"d675e7f1c061d5886eb9e211bcb36e7b73baf7fc472150203029b2bb7c8f072b"} Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.969453 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:56:25 crc kubenswrapper[4975]: I0122 10:56:25.972016 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b986d67d5-jkmkh" podStartSLOduration=5.283902433 podStartE2EDuration="36.971994645s" podCreationTimestamp="2026-01-22 10:55:49 +0000 UTC" firstStartedPulling="2026-01-22 10:55:50.944347215 +0000 UTC m=+1143.602383325" lastFinishedPulling="2026-01-22 10:56:22.632439387 +0000 UTC m=+1175.290475537" observedRunningTime="2026-01-22 10:56:25.959917283 +0000 UTC m=+1178.617953403" watchObservedRunningTime="2026-01-22 10:56:25.971994645 +0000 UTC m=+1178.630030755" Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.025679 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cb95f4cc6-jdv4m" podStartSLOduration=30.025664479 podStartE2EDuration="30.025664479s" podCreationTimestamp="2026-01-22 10:55:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:25.994708206 +0000 UTC m=+1178.652744326" watchObservedRunningTime="2026-01-22 10:56:26.025664479 +0000 UTC m=+1178.683700589" Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.029468 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5bbdd45bdd-ckdgl" podStartSLOduration=30.02945732 podStartE2EDuration="30.02945732s" podCreationTimestamp="2026-01-22 10:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:26.013913723 +0000 UTC m=+1178.671949833" watchObservedRunningTime="2026-01-22 10:56:26.02945732 +0000 UTC m=+1178.687493430" Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.653020 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.653317 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.744489 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.748540 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.952738 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerStarted","Data":"31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de"} Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.954090 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c2656030-bade-452d-a4cb-ba4c12935750","Type":"ContainerStarted","Data":"49adfbb458fcdae3bdc711778acedac8940b15179524bb30862fa2112098c145"} Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.954126 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c2656030-bade-452d-a4cb-ba4c12935750","Type":"ContainerStarted","Data":"6a24b19ff1b21047200bd354e933e08efa2fcd91aaac0ffa2fbb60e2caee51e2"} Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.956798 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465645fc-a907-44d6-8d2b-30e29b40187b","Type":"ContainerStarted","Data":"f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578"} Jan 22 10:56:26 crc kubenswrapper[4975]: I0122 10:56:26.977268 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=28.977250952 podStartE2EDuration="28.977250952s" podCreationTimestamp="2026-01-22 10:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:26.975174402 +0000 UTC m=+1179.633210512" watchObservedRunningTime="2026-01-22 10:56:26.977250952 +0000 UTC m=+1179.635287062" Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.436678 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.436729 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.436774 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.437408 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96a4c784d660fbbfb3a8a9514335df4fa2fd66b35e30bd92a673d8e26178170f"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.437460 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://96a4c784d660fbbfb3a8a9514335df4fa2fd66b35e30bd92a673d8e26178170f" gracePeriod=600 Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.709113 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-zl6sk" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.976113 4975 generic.go:334] "Generic (PLEG): container finished" podID="18942582-238b-4674-831a-d1a2284c912c" containerID="7998680a9cd7cf7a32681593119d7938b68da8ed5d638ba92eb6f9105c0b83cc" exitCode=0 Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.976220 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hr72x" event={"ID":"18942582-238b-4674-831a-d1a2284c912c","Type":"ContainerDied","Data":"7998680a9cd7cf7a32681593119d7938b68da8ed5d638ba92eb6f9105c0b83cc"} Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.979971 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c2656030-bade-452d-a4cb-ba4c12935750","Type":"ContainerStarted","Data":"3313ee02c50a816d3513f2cf5bb15cd21c422a33ae76ce77a75d5636bf912605"} Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.988816 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="96a4c784d660fbbfb3a8a9514335df4fa2fd66b35e30bd92a673d8e26178170f" exitCode=0 Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.989897 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"96a4c784d660fbbfb3a8a9514335df4fa2fd66b35e30bd92a673d8e26178170f"} Jan 22 10:56:27 crc kubenswrapper[4975]: I0122 10:56:27.989947 4975 scope.go:117] "RemoveContainer" 
containerID="16a708e48cbc5f721c35b9e8acfcd4898f3835379f880a9ee0b3d9dc8b03fa83" Jan 22 10:56:28 crc kubenswrapper[4975]: I0122 10:56:28.018772 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.018757379 podStartE2EDuration="5.018757379s" podCreationTimestamp="2026-01-22 10:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:28.017368236 +0000 UTC m=+1180.675404346" watchObservedRunningTime="2026-01-22 10:56:28.018757379 +0000 UTC m=+1180.676793489" Jan 22 10:56:29 crc kubenswrapper[4975]: I0122 10:56:29.004838 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 10:56:29 crc kubenswrapper[4975]: I0122 10:56:29.004911 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 10:56:29 crc kubenswrapper[4975]: I0122 10:56:29.004936 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 10:56:29 crc kubenswrapper[4975]: I0122 10:56:29.004957 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 10:56:29 crc kubenswrapper[4975]: I0122 10:56:29.058198 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 10:56:29 crc kubenswrapper[4975]: I0122 10:56:29.086365 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 10:56:30 crc kubenswrapper[4975]: I0122 10:56:30.025431 4975 generic.go:334] "Generic (PLEG): container finished" podID="7594ce51-7b3b-42e1-a47c-791edf907087" containerID="9b57a69c63f587768367e76faa3583720078a51a532debd6dacd7cc730572cb7" exitCode=0 Jan 22 10:56:30 crc kubenswrapper[4975]: I0122 10:56:30.025517 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n7gh8" event={"ID":"7594ce51-7b3b-42e1-a47c-791edf907087","Type":"ContainerDied","Data":"9b57a69c63f587768367e76faa3583720078a51a532debd6dacd7cc730572cb7"} Jan 22 10:56:30 crc kubenswrapper[4975]: I0122 10:56:30.030247 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"16cfad28f24f35696e7c92e7ab98650eab8e998609b5fcdbdad61136fb9627fd"} Jan 22 10:56:30 crc kubenswrapper[4975]: I0122 10:56:30.031555 4975 generic.go:334] "Generic (PLEG): container finished" podID="590caacf-c377-4aed-abad-b927b28f51f5" containerID="68c44397d85078ceb1f615b35b0f4489644aee354873ff41d24b5c9c40d7d2a8" exitCode=0 Jan 22 10:56:30 crc kubenswrapper[4975]: I0122 10:56:30.031577 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mkhfs" event={"ID":"590caacf-c377-4aed-abad-b927b28f51f5","Type":"ContainerDied","Data":"68c44397d85078ceb1f615b35b0f4489644aee354873ff41d24b5c9c40d7d2a8"} Jan 22 10:56:30 crc kubenswrapper[4975]: I0122 10:56:30.295261 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:56:31 crc kubenswrapper[4975]: I0122 10:56:31.954757 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Jan 22 10:56:31 crc kubenswrapper[4975]: I0122 10:56:31.974783 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.374615 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hr72x" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.390955 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.391074 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.421944 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt2d5\" (UniqueName: \"kubernetes.io/projected/18942582-238b-4674-831a-d1a2284c912c-kube-api-access-bt2d5\") pod \"18942582-238b-4674-831a-d1a2284c912c\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.422045 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-combined-ca-bundle\") pod \"18942582-238b-4674-831a-d1a2284c912c\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.422077 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-scripts\") pod \"18942582-238b-4674-831a-d1a2284c912c\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.422117 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-config-data\") pod \"18942582-238b-4674-831a-d1a2284c912c\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.422279 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18942582-238b-4674-831a-d1a2284c912c-logs\") pod \"18942582-238b-4674-831a-d1a2284c912c\" (UID: \"18942582-238b-4674-831a-d1a2284c912c\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.423546 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18942582-238b-4674-831a-d1a2284c912c-logs" (OuterVolumeSpecName: "logs") pod "18942582-238b-4674-831a-d1a2284c912c" (UID: "18942582-238b-4674-831a-d1a2284c912c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.423919 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mkhfs" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.429275 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18942582-238b-4674-831a-d1a2284c912c-kube-api-access-bt2d5" (OuterVolumeSpecName: "kube-api-access-bt2d5") pod "18942582-238b-4674-831a-d1a2284c912c" (UID: "18942582-238b-4674-831a-d1a2284c912c"). InnerVolumeSpecName "kube-api-access-bt2d5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.435079 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-scripts" (OuterVolumeSpecName: "scripts") pod "18942582-238b-4674-831a-d1a2284c912c" (UID: "18942582-238b-4674-831a-d1a2284c912c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.441439 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.478600 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.483141 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.499545 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18942582-238b-4674-831a-d1a2284c912c" (UID: "18942582-238b-4674-831a-d1a2284c912c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.521049 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-config-data" (OuterVolumeSpecName: "config-data") pod "18942582-238b-4674-831a-d1a2284c912c" (UID: "18942582-238b-4674-831a-d1a2284c912c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.523572 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-combined-ca-bundle\") pod \"7594ce51-7b3b-42e1-a47c-791edf907087\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.523632 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-775pm\" (UniqueName: \"kubernetes.io/projected/590caacf-c377-4aed-abad-b927b28f51f5-kube-api-access-775pm\") pod \"590caacf-c377-4aed-abad-b927b28f51f5\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.523657 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-fernet-keys\") pod \"7594ce51-7b3b-42e1-a47c-791edf907087\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.523686 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-config\") pod \"590caacf-c377-4aed-abad-b927b28f51f5\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.523717 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-credential-keys\") pod \"7594ce51-7b3b-42e1-a47c-791edf907087\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.523733 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-combined-ca-bundle\") pod \"590caacf-c377-4aed-abad-b927b28f51f5\" (UID: \"590caacf-c377-4aed-abad-b927b28f51f5\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.523787 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-scripts\") pod \"7594ce51-7b3b-42e1-a47c-791edf907087\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.523814 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk42s\" (UniqueName: \"kubernetes.io/projected/7594ce51-7b3b-42e1-a47c-791edf907087-kube-api-access-lk42s\") pod \"7594ce51-7b3b-42e1-a47c-791edf907087\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.523866 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-config-data\") pod \"7594ce51-7b3b-42e1-a47c-791edf907087\" (UID: \"7594ce51-7b3b-42e1-a47c-791edf907087\") " Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.524554 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18942582-238b-4674-831a-d1a2284c912c-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.524572 4975 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt2d5\" (UniqueName: \"kubernetes.io/projected/18942582-238b-4674-831a-d1a2284c912c-kube-api-access-bt2d5\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.524583 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.524591 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.524599 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18942582-238b-4674-831a-d1a2284c912c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.527575 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7594ce51-7b3b-42e1-a47c-791edf907087" (UID: "7594ce51-7b3b-42e1-a47c-791edf907087"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.530722 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7594ce51-7b3b-42e1-a47c-791edf907087" (UID: "7594ce51-7b3b-42e1-a47c-791edf907087"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.531528 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590caacf-c377-4aed-abad-b927b28f51f5-kube-api-access-775pm" (OuterVolumeSpecName: "kube-api-access-775pm") pod "590caacf-c377-4aed-abad-b927b28f51f5" (UID: "590caacf-c377-4aed-abad-b927b28f51f5"). InnerVolumeSpecName "kube-api-access-775pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.533484 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7594ce51-7b3b-42e1-a47c-791edf907087-kube-api-access-lk42s" (OuterVolumeSpecName: "kube-api-access-lk42s") pod "7594ce51-7b3b-42e1-a47c-791edf907087" (UID: "7594ce51-7b3b-42e1-a47c-791edf907087"). InnerVolumeSpecName "kube-api-access-lk42s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.543465 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-scripts" (OuterVolumeSpecName: "scripts") pod "7594ce51-7b3b-42e1-a47c-791edf907087" (UID: "7594ce51-7b3b-42e1-a47c-791edf907087"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.559730 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7594ce51-7b3b-42e1-a47c-791edf907087" (UID: "7594ce51-7b3b-42e1-a47c-791edf907087"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.565420 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-config" (OuterVolumeSpecName: "config") pod "590caacf-c377-4aed-abad-b927b28f51f5" (UID: "590caacf-c377-4aed-abad-b927b28f51f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.573519 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-config-data" (OuterVolumeSpecName: "config-data") pod "7594ce51-7b3b-42e1-a47c-791edf907087" (UID: "7594ce51-7b3b-42e1-a47c-791edf907087"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.581542 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "590caacf-c377-4aed-abad-b927b28f51f5" (UID: "590caacf-c377-4aed-abad-b927b28f51f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.626261 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.626311 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-775pm\" (UniqueName: \"kubernetes.io/projected/590caacf-c377-4aed-abad-b927b28f51f5-kube-api-access-775pm\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.626336 4975 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.626374 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.626391 4975 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.626408 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590caacf-c377-4aed-abad-b927b28f51f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.626424 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.626443 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk42s\" (UniqueName: \"kubernetes.io/projected/7594ce51-7b3b-42e1-a47c-791edf907087-kube-api-access-lk42s\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:33 crc kubenswrapper[4975]: I0122 10:56:33.626459 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7594ce51-7b3b-42e1-a47c-791edf907087-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.073845 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n7gh8" event={"ID":"7594ce51-7b3b-42e1-a47c-791edf907087","Type":"ContainerDied","Data":"6a6e629dd51c31a290044087f841969fdd3df503b3eb81e7f999d6b04a6aa164"} Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.074142 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6e629dd51c31a290044087f841969fdd3df503b3eb81e7f999d6b04a6aa164" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.073862 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n7gh8" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.076685 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerStarted","Data":"543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336"} Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.083657 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hr72x" event={"ID":"18942582-238b-4674-831a-d1a2284c912c","Type":"ContainerDied","Data":"7166bcea79a914fc51d5696903e56ca980959d9c2079dfaa22f050c3e0bf4b58"} Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.083697 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7166bcea79a914fc51d5696903e56ca980959d9c2079dfaa22f050c3e0bf4b58" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.083668 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hr72x" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.085865 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mkhfs" event={"ID":"590caacf-c377-4aed-abad-b927b28f51f5","Type":"ContainerDied","Data":"cc0cb0e6faf6bf07454f0d829797f13ef86cf29b210a6a8f2152c767a1248089"} Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.085897 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc0cb0e6faf6bf07454f0d829797f13ef86cf29b210a6a8f2152c767a1248089" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.085995 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mkhfs" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.086352 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.086379 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.561624 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85bb784b78-f8g8c"] Jan 22 10:56:34 crc kubenswrapper[4975]: E0122 10:56:34.562051 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7594ce51-7b3b-42e1-a47c-791edf907087" containerName="keystone-bootstrap" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.562065 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="7594ce51-7b3b-42e1-a47c-791edf907087" containerName="keystone-bootstrap" Jan 22 10:56:34 crc kubenswrapper[4975]: E0122 10:56:34.562078 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="init" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.562088 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="init" Jan 22 10:56:34 crc kubenswrapper[4975]: E0122 10:56:34.562106 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="dnsmasq-dns" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.562114 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="dnsmasq-dns" Jan 22 10:56:34 crc kubenswrapper[4975]: E0122 10:56:34.562126 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590caacf-c377-4aed-abad-b927b28f51f5" containerName="neutron-db-sync" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.562133 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="590caacf-c377-4aed-abad-b927b28f51f5" containerName="neutron-db-sync" Jan 22 10:56:34 crc kubenswrapper[4975]: E0122 10:56:34.562153 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18942582-238b-4674-831a-d1a2284c912c" containerName="placement-db-sync" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.562161 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="18942582-238b-4674-831a-d1a2284c912c" containerName="placement-db-sync" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.562405 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="590caacf-c377-4aed-abad-b927b28f51f5" containerName="neutron-db-sync" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.562420 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="7594ce51-7b3b-42e1-a47c-791edf907087" containerName="keystone-bootstrap" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.562441 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ee3bef-bd02-40d9-8f64-d6cb3f79f9b1" containerName="dnsmasq-dns" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.562470 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="18942582-238b-4674-831a-d1a2284c912c" containerName="placement-db-sync" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.563524 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.573367 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b26c7" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.575777 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.575835 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.575846 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.575779 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.586247 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85bb784b78-f8g8c"] Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.664295 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79786fd675-77kk9"] Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.665370 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.684161 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dr2s2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.684441 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.684560 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.684661 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.684758 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.684844 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.695839 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79786fd675-77kk9"] Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.748291 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trvqv\" (UniqueName: \"kubernetes.io/projected/3dbff787-f548-4837-acb9-e1538c513330-kube-api-access-trvqv\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.748381 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-config-data\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.748404 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dbff787-f548-4837-acb9-e1538c513330-logs\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.748442 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-scripts\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.748461 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-combined-ca-bundle\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.748479 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-public-tls-certs\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.748514 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-internal-tls-certs\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.762300 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bb9bdd9dd-tqvz2"] Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.774803 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.785383 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bt7hf"] Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.787180 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.787779 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nzqbb" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.787959 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.788191 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.788295 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.804442 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bt7hf"] Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.850827 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-combined-ca-bundle\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.850881 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-fernet-keys\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.850904 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.850920 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-ovndb-tls-certs\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.850937 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-config\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.850951 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl2k2\" (UniqueName: \"kubernetes.io/projected/a01d16cc-a78f-4c0c-9893-6739b85eb257-kube-api-access-jl2k2\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.850974 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-config-data\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.850990 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dbff787-f548-4837-acb9-e1538c513330-logs\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851039 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-scripts\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851058 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-config\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851076 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-combined-ca-bundle\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851092 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-public-tls-certs\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851109 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-credential-keys\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851134 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctwkm\" (UniqueName: \"kubernetes.io/projected/40dba24f-eb65-4337-b3e8-6498018ae0cd-kube-api-access-ctwkm\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851153 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-scripts\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851169 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-internal-tls-certs\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851194 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-httpd-config\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851209 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-config-data\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851225 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-combined-ca-bundle\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851247 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851264 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgrt\" (UniqueName: \"kubernetes.io/projected/a3a8b4a6-2a15-4752-918f-066167d8699d-kube-api-access-cbgrt\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851291 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851311 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-internal-tls-certs\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851330 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851370 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-public-tls-certs\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.851391 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trvqv\" (UniqueName: \"kubernetes.io/projected/3dbff787-f548-4837-acb9-e1538c513330-kube-api-access-trvqv\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.861746 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bb9bdd9dd-tqvz2"] Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.862082 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dbff787-f548-4837-acb9-e1538c513330-logs\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.866148 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-combined-ca-bundle\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.869310 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trvqv\" (UniqueName: \"kubernetes.io/projected/3dbff787-f548-4837-acb9-e1538c513330-kube-api-access-trvqv\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.872635 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-public-tls-certs\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.874061 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-scripts\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.874232 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-internal-tls-certs\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.879965 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dbff787-f548-4837-acb9-e1538c513330-config-data\") pod \"placement-85bb784b78-f8g8c\" (UID: \"3dbff787-f548-4837-acb9-e1538c513330\") " pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.880396 4975 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955571 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-httpd-config\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955610 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-config-data\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955631 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-combined-ca-bundle\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955657 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955674 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgrt\" (UniqueName: \"kubernetes.io/projected/a3a8b4a6-2a15-4752-918f-066167d8699d-kube-api-access-cbgrt\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955700 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955721 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-internal-tls-certs\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955741 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955763 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-public-tls-certs\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " 
pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955794 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-combined-ca-bundle\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955816 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-fernet-keys\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955832 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955847 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-ovndb-tls-certs\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955862 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-config\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955877 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl2k2\" (UniqueName: \"kubernetes.io/projected/a01d16cc-a78f-4c0c-9893-6739b85eb257-kube-api-access-jl2k2\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955914 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-config\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955931 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-credential-keys\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.955953 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctwkm\" (UniqueName: \"kubernetes.io/projected/40dba24f-eb65-4337-b3e8-6498018ae0cd-kube-api-access-ctwkm\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 
10:56:34.955969 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-scripts\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.965796 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-scripts\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.970084 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.971577 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.971823 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-public-tls-certs\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.972918 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.973485 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.973987 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-combined-ca-bundle\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.975241 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-config\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.976428 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-combined-ca-bundle\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.976872 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-fernet-keys\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.976912 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-internal-tls-certs\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.979272 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-config\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.986883 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-credential-keys\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.988944 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-httpd-config\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.990823 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a8b4a6-2a15-4752-918f-066167d8699d-config-data\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.991535 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbgrt\" (UniqueName: \"kubernetes.io/projected/a3a8b4a6-2a15-4752-918f-066167d8699d-kube-api-access-cbgrt\") pod \"keystone-79786fd675-77kk9\" (UID: \"a3a8b4a6-2a15-4752-918f-066167d8699d\") " pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.992464 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctwkm\" (UniqueName: \"kubernetes.io/projected/40dba24f-eb65-4337-b3e8-6498018ae0cd-kube-api-access-ctwkm\") pod \"dnsmasq-dns-55f844cf75-bt7hf\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.997162 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl2k2\" (UniqueName: \"kubernetes.io/projected/a01d16cc-a78f-4c0c-9893-6739b85eb257-kube-api-access-jl2k2\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " 
pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:34 crc kubenswrapper[4975]: I0122 10:56:34.997666 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-ovndb-tls-certs\") pod \"neutron-6bb9bdd9dd-tqvz2\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.025792 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.077857 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d6d6969d8-6mpvr"] Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.079210 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.089379 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d6d6969d8-6mpvr"] Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.134723 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.166666 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.272159 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-config\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.272217 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kstg8\" (UniqueName: \"kubernetes.io/projected/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-kube-api-access-kstg8\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.272258 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-combined-ca-bundle\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.272277 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-ovndb-tls-certs\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.272297 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-httpd-config\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.380492 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-config\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.380595 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kstg8\" (UniqueName: \"kubernetes.io/projected/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-kube-api-access-kstg8\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.380657 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-combined-ca-bundle\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.380688 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-ovndb-tls-certs\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.380722 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-httpd-config\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.387559 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-config\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.387895 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-httpd-config\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.397545 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-ovndb-tls-certs\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.403745 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-combined-ca-bundle\") pod \"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.412359 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kstg8\" (UniqueName: \"kubernetes.io/projected/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-kube-api-access-kstg8\") pod 
\"neutron-5d6d6969d8-6mpvr\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.577837 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79786fd675-77kk9"] Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.711102 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.756386 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85bb784b78-f8g8c"] Jan 22 10:56:35 crc kubenswrapper[4975]: W0122 10:56:35.960169 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40dba24f_eb65_4337_b3e8_6498018ae0cd.slice/crio-03bf6c3c89deb9815bd23aea1ad006734d2417ca2f3ed74399836dec59563aa4 WatchSource:0}: Error finding container 03bf6c3c89deb9815bd23aea1ad006734d2417ca2f3ed74399836dec59563aa4: Status 404 returned error can't find the container with id 03bf6c3c89deb9815bd23aea1ad006734d2417ca2f3ed74399836dec59563aa4 Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.974801 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bt7hf"] Jan 22 10:56:35 crc kubenswrapper[4975]: W0122 10:56:35.984137 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda01d16cc_a78f_4c0c_9893_6739b85eb257.slice/crio-7701914175bc730658af2601bd625f923cd48d249b39b7a379b2f153dae39600 WatchSource:0}: Error finding container 7701914175bc730658af2601bd625f923cd48d249b39b7a379b2f153dae39600: Status 404 returned error can't find the container with id 7701914175bc730658af2601bd625f923cd48d249b39b7a379b2f153dae39600 Jan 22 10:56:35 crc kubenswrapper[4975]: I0122 10:56:35.984262 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bb9bdd9dd-tqvz2"] Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.149110 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79786fd675-77kk9" event={"ID":"a3a8b4a6-2a15-4752-918f-066167d8699d","Type":"ContainerStarted","Data":"3d1577f96278444ad8e10d62ae73b1a28aa0c1fd72f0936e81d1bb68c2a37c03"} Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.149499 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79786fd675-77kk9" event={"ID":"a3a8b4a6-2a15-4752-918f-066167d8699d","Type":"ContainerStarted","Data":"822f4698f8f1aacfbb9ce10d61cb187124693499caef80155fa6cc967c6babae"} Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.149723 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.155409 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bb784b78-f8g8c" event={"ID":"3dbff787-f548-4837-acb9-e1538c513330","Type":"ContainerStarted","Data":"4d803742a42613eaa176961d1a88c8626e4a84a7e0b110708ac17dbf7dc5808e"} Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.155466 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bb784b78-f8g8c" event={"ID":"3dbff787-f548-4837-acb9-e1538c513330","Type":"ContainerStarted","Data":"91f0fd548d687b929be90d17306801c9fb337a9b5c120f828818486fa7526f4e"} Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.159364 4975 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-6bb9bdd9dd-tqvz2" event={"ID":"a01d16cc-a78f-4c0c-9893-6739b85eb257","Type":"ContainerStarted","Data":"7701914175bc730658af2601bd625f923cd48d249b39b7a379b2f153dae39600"} Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.164476 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" event={"ID":"40dba24f-eb65-4337-b3e8-6498018ae0cd","Type":"ContainerStarted","Data":"03bf6c3c89deb9815bd23aea1ad006734d2417ca2f3ed74399836dec59563aa4"} Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.340399 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79786fd675-77kk9" podStartSLOduration=2.340382618 podStartE2EDuration="2.340382618s" podCreationTimestamp="2026-01-22 10:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:36.173214401 +0000 UTC m=+1188.831250511" watchObservedRunningTime="2026-01-22 10:56:36.340382618 +0000 UTC m=+1188.998418718" Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.345570 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d6d6969d8-6mpvr"] Jan 22 10:56:36 crc kubenswrapper[4975]: W0122 10:56:36.403500 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8e0f17_b39b_4c05_abde_d1d73dd7b20c.slice/crio-936d6a6471ad562a0fc2b825a39b585fa64f4f2568c09cafb8a57751e4faf8e5 WatchSource:0}: Error finding container 936d6a6471ad562a0fc2b825a39b585fa64f4f2568c09cafb8a57751e4faf8e5: Status 404 returned error can't find the container with id 936d6a6471ad562a0fc2b825a39b585fa64f4f2568c09cafb8a57751e4faf8e5 Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.658170 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bbdd45bdd-ckdgl" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.719182 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.719295 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 10:56:36 crc kubenswrapper[4975]: I0122 10:56:36.746612 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cb95f4cc6-jdv4m" podUID="d2027dc6-aa67-467c-ad8d-e31ccb6d66ee" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.196553 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bb9bdd9dd-tqvz2" event={"ID":"a01d16cc-a78f-4c0c-9893-6739b85eb257","Type":"ContainerStarted","Data":"b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347"} Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.196876 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bb9bdd9dd-tqvz2" 
event={"ID":"a01d16cc-a78f-4c0c-9893-6739b85eb257","Type":"ContainerStarted","Data":"debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f"} Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.197923 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.201785 4975 generic.go:334] "Generic (PLEG): container finished" podID="40dba24f-eb65-4337-b3e8-6498018ae0cd" containerID="434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369" exitCode=0 Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.201838 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" event={"ID":"40dba24f-eb65-4337-b3e8-6498018ae0cd","Type":"ContainerDied","Data":"434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369"} Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.222460 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bb9bdd9dd-tqvz2" podStartSLOduration=3.222444944 podStartE2EDuration="3.222444944s" podCreationTimestamp="2026-01-22 10:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:37.214721038 +0000 UTC m=+1189.872757148" watchObservedRunningTime="2026-01-22 10:56:37.222444944 +0000 UTC m=+1189.880481054" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.275465 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6d6969d8-6mpvr" event={"ID":"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c","Type":"ContainerStarted","Data":"170be906508f324bc03f9f51c81d740af00020c9ee1ec02a921092d973426289"} Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.275822 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6d6969d8-6mpvr" event={"ID":"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c","Type":"ContainerStarted","Data":"936d6a6471ad562a0fc2b825a39b585fa64f4f2568c09cafb8a57751e4faf8e5"} Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.300300 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bb784b78-f8g8c" event={"ID":"3dbff787-f548-4837-acb9-e1538c513330","Type":"ContainerStarted","Data":"b940ae297c0517c35645f0f0d29062ae760b86ad9f889c5d59b72b200c8bd204"} Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.337252 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bb9bdd9dd-tqvz2"] Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.368578 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85bb784b78-f8g8c" podStartSLOduration=3.368564481 podStartE2EDuration="3.368564481s" podCreationTimestamp="2026-01-22 10:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:37.364090972 +0000 UTC m=+1190.022127082" watchObservedRunningTime="2026-01-22 10:56:37.368564481 +0000 UTC m=+1190.026600581" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.422642 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54575f849-qtpk8"] Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.424406 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.430698 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.430704 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.435216 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54575f849-qtpk8"] Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.563752 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-public-tls-certs\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.564153 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-combined-ca-bundle\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.564172 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-ovndb-tls-certs\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.564224 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx22m\" (UniqueName: \"kubernetes.io/projected/17be82a9-4845-4b3d-85ca-82489d4687b2-kube-api-access-qx22m\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.564248 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-internal-tls-certs\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.564268 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-config\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.564314 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-httpd-config\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.666237 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-public-tls-certs\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.666300 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-combined-ca-bundle\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.666318 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-ovndb-tls-certs\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.666376 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx22m\" (UniqueName: \"kubernetes.io/projected/17be82a9-4845-4b3d-85ca-82489d4687b2-kube-api-access-qx22m\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.666401 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-internal-tls-certs\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.666421 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-config\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.666462 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-httpd-config\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.674857 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-internal-tls-certs\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.675410 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-combined-ca-bundle\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.675510 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-public-tls-certs\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") 
" pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.683460 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-httpd-config\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.683641 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-config\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.683965 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17be82a9-4845-4b3d-85ca-82489d4687b2-ovndb-tls-certs\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.695209 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx22m\" (UniqueName: \"kubernetes.io/projected/17be82a9-4845-4b3d-85ca-82489d4687b2-kube-api-access-qx22m\") pod \"neutron-54575f849-qtpk8\" (UID: \"17be82a9-4845-4b3d-85ca-82489d4687b2\") " pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:37 crc kubenswrapper[4975]: I0122 10:56:37.796696 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.311110 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" event={"ID":"40dba24f-eb65-4337-b3e8-6498018ae0cd","Type":"ContainerStarted","Data":"15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075"} Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.311509 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.313216 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-97z49" event={"ID":"abbc90ac-2417-470d-84cf-ddc823f23e50","Type":"ContainerStarted","Data":"9069ee30524a475c7fcf94d4e779314e50eda7feb4961d18af59b94b2d3b53d5"} Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.317746 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6d6969d8-6mpvr" event={"ID":"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c","Type":"ContainerStarted","Data":"e18c920c31b082cbd365a6ee627abb84120e29d97369f76ac80c364344894f59"} Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.317784 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.317799 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.318150 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.334692 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54575f849-qtpk8"] Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.337067 4975 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" podStartSLOduration=4.337046215 podStartE2EDuration="4.337046215s" podCreationTimestamp="2026-01-22 10:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:38.329542043 +0000 UTC m=+1190.987578153" watchObservedRunningTime="2026-01-22 10:56:38.337046215 +0000 UTC m=+1190.995082325"
Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.357662 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d6d6969d8-6mpvr" podStartSLOduration=3.357645755 podStartE2EDuration="3.357645755s" podCreationTimestamp="2026-01-22 10:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:38.345779857 +0000 UTC m=+1191.003815977" watchObservedRunningTime="2026-01-22 10:56:38.357645755 +0000 UTC m=+1191.015681865"
Jan 22 10:56:38 crc kubenswrapper[4975]: W0122 10:56:38.364083 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17be82a9_4845_4b3d_85ca_82489d4687b2.slice/crio-cdfc0ac1c896841b45eca832b9a7feb4ccd210a31dd24e274619a78b0dd379a6 WatchSource:0}: Error finding container cdfc0ac1c896841b45eca832b9a7feb4ccd210a31dd24e274619a78b0dd379a6: Status 404 returned error can't find the container with id cdfc0ac1c896841b45eca832b9a7feb4ccd210a31dd24e274619a78b0dd379a6
Jan 22 10:56:38 crc kubenswrapper[4975]: I0122 10:56:38.372311 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-97z49" podStartSLOduration=3.502629344 podStartE2EDuration="50.372288981s" podCreationTimestamp="2026-01-22 10:55:48 +0000 UTC" firstStartedPulling="2026-01-22 10:55:50.492443537 +0000 UTC m=+1143.150479647" lastFinishedPulling="2026-01-22 10:56:37.362103174 +0000 UTC m=+1190.020139284" observedRunningTime="2026-01-22 10:56:38.362549734 +0000 UTC m=+1191.020585844" watchObservedRunningTime="2026-01-22 10:56:38.372288981 +0000 UTC m=+1191.030325091"
Jan 22 10:56:39 crc kubenswrapper[4975]: I0122 10:56:39.364693 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bb9bdd9dd-tqvz2" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerName="neutron-api" containerID="cri-o://debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f" gracePeriod=30
Jan 22 10:56:39 crc kubenswrapper[4975]: I0122 10:56:39.366163 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54575f849-qtpk8" event={"ID":"17be82a9-4845-4b3d-85ca-82489d4687b2","Type":"ContainerStarted","Data":"9a336825b3c3be825579e39f07b5b3a716932f274430dc2e17b4ed341be95792"}
Jan 22 10:56:39 crc kubenswrapper[4975]: I0122 10:56:39.366196 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54575f849-qtpk8"
Jan 22 10:56:39 crc kubenswrapper[4975]: I0122 10:56:39.366207 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54575f849-qtpk8" event={"ID":"17be82a9-4845-4b3d-85ca-82489d4687b2","Type":"ContainerStarted","Data":"9692d9c3c4b35d78e5b946311fb83054f46de9c3912ba7df3fcfa1deb172ff12"}
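
The pod_startup_latency_tracker entries above also show how image pulls are accounted for. For dnsmasq-dns and neutron-5d6d6969d8 the pull timestamps are the zero time ("0001-01-01 ..."), so podStartSLOduration equals podStartE2EDuration; for barbican-db-sync-97z49 the image pull ran from 10:55:50.49 to 10:56:37.36 (~46.87s), and the SLO figure appears to be the end-to-end figure minus that pull window: 50.372288981s - 46.869659637s = 3.502629344s, exactly the logged podStartSLOduration. A small Go check with the timestamps copied from the log (the layout string is the only assumption here):

package main

import (
	"fmt"
	"time"
)

// mustParse parses the "2026-01-22 10:56:38.372288981 +0000 UTC" style
// timestamps printed in the log (Go accepts the fractional seconds on input
// even though the layout omits them).
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-22 10:55:48 +0000 UTC")
	firstPull := mustParse("2026-01-22 10:55:50.492443537 +0000 UTC")
	lastPull := mustParse("2026-01-22 10:56:37.362103174 +0000 UTC")
	observed := mustParse("2026-01-22 10:56:38.372288981 +0000 UTC")

	e2e := observed.Sub(created)         // 50.372288981s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 3.502629344s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}
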
event={"ID":"17be82a9-4845-4b3d-85ca-82489d4687b2","Type":"ContainerStarted","Data":"cdfc0ac1c896841b45eca832b9a7feb4ccd210a31dd24e274619a78b0dd379a6"} Jan 22 10:56:39 crc kubenswrapper[4975]: I0122 10:56:39.367603 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bb9bdd9dd-tqvz2" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerName="neutron-httpd" containerID="cri-o://b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347" gracePeriod=30 Jan 22 10:56:39 crc kubenswrapper[4975]: I0122 10:56:39.406847 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54575f849-qtpk8" podStartSLOduration=2.406833318 podStartE2EDuration="2.406833318s" podCreationTimestamp="2026-01-22 10:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:39.401786155 +0000 UTC m=+1192.059822255" watchObservedRunningTime="2026-01-22 10:56:39.406833318 +0000 UTC m=+1192.064869428" Jan 22 10:56:40 crc kubenswrapper[4975]: I0122 10:56:40.381603 4975 generic.go:334] "Generic (PLEG): container finished" podID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerID="b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347" exitCode=0 Jan 22 10:56:40 crc kubenswrapper[4975]: I0122 10:56:40.382753 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bb9bdd9dd-tqvz2" event={"ID":"a01d16cc-a78f-4c0c-9893-6739b85eb257","Type":"ContainerDied","Data":"b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347"} Jan 22 10:56:42 crc kubenswrapper[4975]: I0122 10:56:42.398561 4975 generic.go:334] "Generic (PLEG): container finished" podID="abbc90ac-2417-470d-84cf-ddc823f23e50" containerID="9069ee30524a475c7fcf94d4e779314e50eda7feb4961d18af59b94b2d3b53d5" exitCode=0 Jan 22 10:56:42 crc kubenswrapper[4975]: I0122 10:56:42.398711 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-97z49" event={"ID":"abbc90ac-2417-470d-84cf-ddc823f23e50","Type":"ContainerDied","Data":"9069ee30524a475c7fcf94d4e779314e50eda7feb4961d18af59b94b2d3b53d5"} Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.096296 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-97z49" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.114864 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-combined-ca-bundle\") pod \"abbc90ac-2417-470d-84cf-ddc823f23e50\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.115001 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx7tx\" (UniqueName: \"kubernetes.io/projected/abbc90ac-2417-470d-84cf-ddc823f23e50-kube-api-access-xx7tx\") pod \"abbc90ac-2417-470d-84cf-ddc823f23e50\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.115053 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-db-sync-config-data\") pod \"abbc90ac-2417-470d-84cf-ddc823f23e50\" (UID: \"abbc90ac-2417-470d-84cf-ddc823f23e50\") " Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.121287 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "abbc90ac-2417-470d-84cf-ddc823f23e50" (UID: "abbc90ac-2417-470d-84cf-ddc823f23e50"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.140647 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abbc90ac-2417-470d-84cf-ddc823f23e50-kube-api-access-xx7tx" (OuterVolumeSpecName: "kube-api-access-xx7tx") pod "abbc90ac-2417-470d-84cf-ddc823f23e50" (UID: "abbc90ac-2417-470d-84cf-ddc823f23e50"). InnerVolumeSpecName "kube-api-access-xx7tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.144312 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abbc90ac-2417-470d-84cf-ddc823f23e50" (UID: "abbc90ac-2417-470d-84cf-ddc823f23e50"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.171506 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.224720 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.224763 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx7tx\" (UniqueName: \"kubernetes.io/projected/abbc90ac-2417-470d-84cf-ddc823f23e50-kube-api-access-xx7tx\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.224777 4975 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbc90ac-2417-470d-84cf-ddc823f23e50-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.234762 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4scm"] Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.234969 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" podUID="090fc6ba-2ecd-4dbd-8e2c-86112df32aef" containerName="dnsmasq-dns" containerID="cri-o://39981e88845908fcd9acf9f663a562581c50ff8fd349f9eee6e7ab9fea527b29" gracePeriod=10 Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.443968 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-97z49" event={"ID":"abbc90ac-2417-470d-84cf-ddc823f23e50","Type":"ContainerDied","Data":"c353cb2d71877a9d96b5161f420e4b76cc23ffe9166de5646e9c65bcee6a9421"} Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.444026 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c353cb2d71877a9d96b5161f420e4b76cc23ffe9166de5646e9c65bcee6a9421" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.443987 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-97z49" Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.445660 4975 generic.go:334] "Generic (PLEG): container finished" podID="090fc6ba-2ecd-4dbd-8e2c-86112df32aef" containerID="39981e88845908fcd9acf9f663a562581c50ff8fd349f9eee6e7ab9fea527b29" exitCode=0 Jan 22 10:56:45 crc kubenswrapper[4975]: I0122 10:56:45.445701 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" event={"ID":"090fc6ba-2ecd-4dbd-8e2c-86112df32aef","Type":"ContainerDied","Data":"39981e88845908fcd9acf9f663a562581c50ff8fd349f9eee6e7ab9fea527b29"} Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.295498 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7f5cf4887-x2cb8"] Jan 22 10:56:46 crc kubenswrapper[4975]: E0122 10:56:46.296068 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbc90ac-2417-470d-84cf-ddc823f23e50" containerName="barbican-db-sync" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.296079 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbc90ac-2417-470d-84cf-ddc823f23e50" containerName="barbican-db-sync" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.296247 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbc90ac-2417-470d-84cf-ddc823f23e50" containerName="barbican-db-sync" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.297074 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.302081 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.302251 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ddvc7" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.302381 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.343312 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f5cf4887-x2cb8"] Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.347278 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqqj2\" (UniqueName: \"kubernetes.io/projected/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-kube-api-access-lqqj2\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.347390 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-config-data-custom\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.347432 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-combined-ca-bundle\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc 
Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.347475 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-logs\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8"
Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.347515 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-config-data\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8"
Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.395011 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-749cddb87d-66769"]
Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.396459 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-749cddb87d-66769"
Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.398686 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.419808 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-749cddb87d-66769"]
Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.438391 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hb7cq"]
Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.456784 4975 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.465436 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-config-data\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.465494 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-logs\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.465625 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqqj2\" (UniqueName: \"kubernetes.io/projected/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-kube-api-access-lqqj2\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.465722 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-config-data\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.465779 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-config-data-custom\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.465804 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdcbc\" (UniqueName: \"kubernetes.io/projected/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-kube-api-access-pdcbc\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.465855 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-config-data-custom\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.465919 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-combined-ca-bundle\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.466046 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-logs\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.466106 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-combined-ca-bundle\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.468972 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-logs\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.488062 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-config-data\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.492966 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-config-data-custom\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.505795 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hb7cq"] Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.506356 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqqj2\" (UniqueName: \"kubernetes.io/projected/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-kube-api-access-lqqj2\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.509288 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2be393-e3cf-4356-9f2e-6334f7ee99cc-combined-ca-bundle\") pod \"barbican-worker-7f5cf4887-x2cb8\" (UID: \"4c2be393-e3cf-4356-9f2e-6334f7ee99cc\") " pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578365 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-config-data\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578688 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " 
pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578715 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdcbc\" (UniqueName: \"kubernetes.io/projected/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-kube-api-access-pdcbc\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578737 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-config-data-custom\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578782 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578805 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578844 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-combined-ca-bundle\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578865 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578893 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-logs\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578909 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-config\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.578939 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l72nm\" (UniqueName: 
\"kubernetes.io/projected/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-kube-api-access-l72nm\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.582912 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-logs\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.586010 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-combined-ca-bundle\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.600603 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5545f57d98-4rk8s"] Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.602433 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-config-data\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.602762 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.605251 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdcbc\" (UniqueName: \"kubernetes.io/projected/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-kube-api-access-pdcbc\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.606888 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.610075 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5545f57d98-4rk8s"] Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.625863 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7922a3a-ab4e-4400-9da2-b01efcf11ca7-config-data-custom\") pod \"barbican-keystone-listener-749cddb87d-66769\" (UID: \"d7922a3a-ab4e-4400-9da2-b01efcf11ca7\") " pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.658462 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bbdd45bdd-ckdgl" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.659822 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7f5cf4887-x2cb8" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680215 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680267 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txrqv\" (UniqueName: \"kubernetes.io/projected/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-kube-api-access-txrqv\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680296 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-logs\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680319 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-combined-ca-bundle\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680357 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680376 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680400 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680437 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680463 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-config\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: 
\"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680489 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l72nm\" (UniqueName: \"kubernetes.io/projected/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-kube-api-access-l72nm\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.680510 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data-custom\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.681445 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.681734 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.681819 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.682136 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-config\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.682270 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.716750 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l72nm\" (UniqueName: \"kubernetes.io/projected/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-kube-api-access-l72nm\") pod \"dnsmasq-dns-85ff748b95-hb7cq\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.745544 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cb95f4cc6-jdv4m" podUID="d2027dc6-aa67-467c-ad8d-e31ccb6d66ee" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: 
connect: connection refused" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.756296 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-749cddb87d-66769" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.783136 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data-custom\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.783418 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txrqv\" (UniqueName: \"kubernetes.io/projected/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-kube-api-access-txrqv\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.783483 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-logs\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.783540 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-combined-ca-bundle\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.783973 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-logs\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.784209 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.788841 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-combined-ca-bundle\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.789044 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data-custom\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.789846 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.804179 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txrqv\" (UniqueName: \"kubernetes.io/projected/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-kube-api-access-txrqv\") pod \"barbican-api-5545f57d98-4rk8s\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.886559 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:46 crc kubenswrapper[4975]: I0122 10:56:46.989202 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:47 crc kubenswrapper[4975]: I0122 10:56:47.933682 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.109581 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-sb\") pod \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.109651 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-config\") pod \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.109750 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-svc\") pod \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.109782 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b2kj\" (UniqueName: \"kubernetes.io/projected/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-kube-api-access-9b2kj\") pod \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.109881 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-nb\") pod \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.109901 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-swift-storage-0\") pod \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\" (UID: \"090fc6ba-2ecd-4dbd-8e2c-86112df32aef\") " Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.116162 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-kube-api-access-9b2kj" (OuterVolumeSpecName: "kube-api-access-9b2kj") 
pod "090fc6ba-2ecd-4dbd-8e2c-86112df32aef" (UID: "090fc6ba-2ecd-4dbd-8e2c-86112df32aef"). InnerVolumeSpecName "kube-api-access-9b2kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.129525 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hb7cq"] Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.200564 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "090fc6ba-2ecd-4dbd-8e2c-86112df32aef" (UID: "090fc6ba-2ecd-4dbd-8e2c-86112df32aef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.204577 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "090fc6ba-2ecd-4dbd-8e2c-86112df32aef" (UID: "090fc6ba-2ecd-4dbd-8e2c-86112df32aef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.211903 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.211935 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b2kj\" (UniqueName: \"kubernetes.io/projected/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-kube-api-access-9b2kj\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.211947 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.260597 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-749cddb87d-66769"] Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.276181 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "090fc6ba-2ecd-4dbd-8e2c-86112df32aef" (UID: "090fc6ba-2ecd-4dbd-8e2c-86112df32aef"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.286476 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-config" (OuterVolumeSpecName: "config") pod "090fc6ba-2ecd-4dbd-8e2c-86112df32aef" (UID: "090fc6ba-2ecd-4dbd-8e2c-86112df32aef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.300966 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "090fc6ba-2ecd-4dbd-8e2c-86112df32aef" (UID: "090fc6ba-2ecd-4dbd-8e2c-86112df32aef"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.315665 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.315700 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.315710 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090fc6ba-2ecd-4dbd-8e2c-86112df32aef-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.319772 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f5cf4887-x2cb8"] Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.331887 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5545f57d98-4rk8s"] Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.549471 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5545f57d98-4rk8s" event={"ID":"5e5d1da3-0663-4a0d-984b-5e71e31a2e88","Type":"ContainerStarted","Data":"67b68859f33d50442674dda07c26d271187b8398123118af889ef4cc966a6df9"} Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.563857 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" event={"ID":"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b","Type":"ContainerStarted","Data":"10160ce8df405b1cc4e80cca212a7f0369413455a34215d9e1f589527e32eb53"} Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.565157 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f5cf4887-x2cb8" event={"ID":"4c2be393-e3cf-4356-9f2e-6334f7ee99cc","Type":"ContainerStarted","Data":"7fece12d9cafa719feaf48296b5479f150e63ec7587fe9dc3cd12852da5d4a8c"} Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.567376 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" event={"ID":"090fc6ba-2ecd-4dbd-8e2c-86112df32aef","Type":"ContainerDied","Data":"9e0ea78df671e36810356f60b5d6b014fa61dd78c9fbd1e26c9e3f554f2b4509"} Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.567418 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v4scm" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.567439 4975 scope.go:117] "RemoveContainer" containerID="39981e88845908fcd9acf9f663a562581c50ff8fd349f9eee6e7ab9fea527b29" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.569543 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-749cddb87d-66769" event={"ID":"d7922a3a-ab4e-4400-9da2-b01efcf11ca7","Type":"ContainerStarted","Data":"f755828080bc13f14acd46204c1ca3d8a478d6ae4ffe5bf89c861e8fff823e01"} Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.589496 4975 scope.go:117] "RemoveContainer" containerID="bf733f359c0b2cf97aa4f5fde8d1157f7863448da61fa4471c124f7f404566a7" Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.606464 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4scm"] Jan 22 10:56:48 crc kubenswrapper[4975]: I0122 10:56:48.614363 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4scm"] Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.360282 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d584b4fcd-5llj7"] Jan 22 10:56:49 crc kubenswrapper[4975]: E0122 10:56:49.360600 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090fc6ba-2ecd-4dbd-8e2c-86112df32aef" containerName="init" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.360612 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="090fc6ba-2ecd-4dbd-8e2c-86112df32aef" containerName="init" Jan 22 10:56:49 crc kubenswrapper[4975]: E0122 10:56:49.360620 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090fc6ba-2ecd-4dbd-8e2c-86112df32aef" containerName="dnsmasq-dns" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.360626 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="090fc6ba-2ecd-4dbd-8e2c-86112df32aef" containerName="dnsmasq-dns" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.360841 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="090fc6ba-2ecd-4dbd-8e2c-86112df32aef" containerName="dnsmasq-dns" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.362463 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.365023 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.365616 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.439819 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d584b4fcd-5llj7"] Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.538611 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa760241-75cd-4e21-8b47-5e367c1cff52-logs\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.538669 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-internal-tls-certs\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.538700 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-config-data\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.538764 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-public-tls-certs\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.539552 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-combined-ca-bundle\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.539666 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-config-data-custom\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.539692 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2g8\" (UniqueName: \"kubernetes.io/projected/fa760241-75cd-4e21-8b47-5e367c1cff52-kube-api-access-vb2g8\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.585237 4975 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-db-sync-x24jb" event={"ID":"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f","Type":"ContainerStarted","Data":"fa3f54390ce11f410f44674f2f87ab26787fac6ef23dfb8f27ec565a28a1eacf"} Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.589009 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerStarted","Data":"ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40"} Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.589235 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="ceilometer-central-agent" containerID="cri-o://dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85" gracePeriod=30 Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.589569 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.589595 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="sg-core" containerID="cri-o://543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336" gracePeriod=30 Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.589639 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="proxy-httpd" containerID="cri-o://ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40" gracePeriod=30 Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.589616 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="ceilometer-notification-agent" containerID="cri-o://31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de" gracePeriod=30 Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.611509 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5545f57d98-4rk8s" event={"ID":"5e5d1da3-0663-4a0d-984b-5e71e31a2e88","Type":"ContainerStarted","Data":"4fe04ab208bfce1faac1d7526336eb52ca2e1abcb9d7392242a58d72bafbabbe"} Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.625093 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.999208067 podStartE2EDuration="1m2.625070965s" podCreationTimestamp="2026-01-22 10:55:47 +0000 UTC" firstStartedPulling="2026-01-22 10:55:49.068160801 +0000 UTC m=+1141.726196911" lastFinishedPulling="2026-01-22 10:56:47.694023699 +0000 UTC m=+1200.352059809" observedRunningTime="2026-01-22 10:56:49.611606638 +0000 UTC m=+1202.269642768" watchObservedRunningTime="2026-01-22 10:56:49.625070965 +0000 UTC m=+1202.283107085" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.626768 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" event={"ID":"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b","Type":"ContainerStarted","Data":"aefa34b03211c89e7b3d5ace2b7761699c02eab2895683367df28e26c9fe9810"} Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.641100 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-combined-ca-bundle\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.641276 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-config-data-custom\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.641320 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb2g8\" (UniqueName: \"kubernetes.io/projected/fa760241-75cd-4e21-8b47-5e367c1cff52-kube-api-access-vb2g8\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.641453 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa760241-75cd-4e21-8b47-5e367c1cff52-logs\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.641623 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-internal-tls-certs\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.641669 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-config-data\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.641695 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-public-tls-certs\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.652102 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa760241-75cd-4e21-8b47-5e367c1cff52-logs\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.652554 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-public-tls-certs\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.653148 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-internal-tls-certs\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.654673 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-config-data-custom\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.665095 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-config-data\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.666817 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb2g8\" (UniqueName: \"kubernetes.io/projected/fa760241-75cd-4e21-8b47-5e367c1cff52-kube-api-access-vb2g8\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.667848 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa760241-75cd-4e21-8b47-5e367c1cff52-combined-ca-bundle\") pod \"barbican-api-7d584b4fcd-5llj7\" (UID: \"fa760241-75cd-4e21-8b47-5e367c1cff52\") " pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.681244 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:49 crc kubenswrapper[4975]: I0122 10:56:49.797229 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090fc6ba-2ecd-4dbd-8e2c-86112df32aef" path="/var/lib/kubelet/pods/090fc6ba-2ecd-4dbd-8e2c-86112df32aef/volumes" Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.242749 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d584b4fcd-5llj7"] Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.636988 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5545f57d98-4rk8s" event={"ID":"5e5d1da3-0663-4a0d-984b-5e71e31a2e88","Type":"ContainerStarted","Data":"87cb7fef6a5827783005c9df13ac3de0b70787e88659a55fd37ce5c9da9dc9dc"} Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.637054 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.637075 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.639910 4975 generic.go:334] "Generic (PLEG): container finished" podID="a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" containerID="aefa34b03211c89e7b3d5ace2b7761699c02eab2895683367df28e26c9fe9810" exitCode=0 Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.639988 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" event={"ID":"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b","Type":"ContainerDied","Data":"aefa34b03211c89e7b3d5ace2b7761699c02eab2895683367df28e26c9fe9810"} Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.640018 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" event={"ID":"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b","Type":"ContainerStarted","Data":"0c5faab4484ca63244181b21b6ed28fba0fbd5c471e2edcfedef5ea303abc629"} Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.640066 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.645288 4975 generic.go:334] "Generic (PLEG): container finished" podID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerID="ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40" exitCode=0 Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.645323 4975 generic.go:334] "Generic (PLEG): container finished" podID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerID="543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336" exitCode=2 Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.645354 4975 generic.go:334] "Generic (PLEG): container finished" podID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerID="dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85" exitCode=0 Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.645360 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerDied","Data":"ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40"} Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.645391 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerDied","Data":"543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336"} Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.645405 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerDied","Data":"dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85"} Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.666350 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5545f57d98-4rk8s" podStartSLOduration=4.666318394 podStartE2EDuration="4.666318394s" podCreationTimestamp="2026-01-22 10:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:50.665063194 +0000 UTC m=+1203.323099304" watchObservedRunningTime="2026-01-22 10:56:50.666318394 +0000 UTC m=+1203.324354504" Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.693052 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-x24jb" podStartSLOduration=4.92442511 podStartE2EDuration="1m3.693035853s" podCreationTimestamp="2026-01-22 10:55:47 +0000 UTC" firstStartedPulling="2026-01-22 10:55:48.909936421 +0000 UTC m=+1141.567972521" lastFinishedPulling="2026-01-22 10:56:47.678547164 +0000 UTC m=+1200.336583264" observedRunningTime="2026-01-22 10:56:50.68626632 +0000 UTC m=+1203.344302430" watchObservedRunningTime="2026-01-22 10:56:50.693035853 +0000 UTC m=+1203.351071963" Jan 22 10:56:50 crc kubenswrapper[4975]: I0122 10:56:50.706216 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" podStartSLOduration=4.706207583 podStartE2EDuration="4.706207583s" podCreationTimestamp="2026-01-22 10:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:50.705193338 +0000 UTC m=+1203.363229448" watchObservedRunningTime="2026-01-22 10:56:50.706207583 +0000 UTC m=+1203.364243693" Jan 22 10:56:54 crc kubenswrapper[4975]: I0122 10:56:54.695640 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d584b4fcd-5llj7" event={"ID":"fa760241-75cd-4e21-8b47-5e367c1cff52","Type":"ContainerStarted","Data":"e15aff5cf0afc681b3f576b13ccd718d22181968cebd4357e64b8dad664bf9b0"} Jan 22 10:56:54 crc kubenswrapper[4975]: I0122 10:56:54.696304 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d584b4fcd-5llj7" event={"ID":"fa760241-75cd-4e21-8b47-5e367c1cff52","Type":"ContainerStarted","Data":"69206bc9d4006ebbf7e13f3b6f1a6c991ecd8227bc315a83912596872c80ac96"} Jan 22 10:56:54 crc kubenswrapper[4975]: I0122 10:56:54.709756 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f5cf4887-x2cb8" event={"ID":"4c2be393-e3cf-4356-9f2e-6334f7ee99cc","Type":"ContainerStarted","Data":"09cf0e1f896d51b48ccfcbb96236f16c07b3202aec35ea0f9203d1b1df0afc3a"} Jan 22 10:56:54 crc kubenswrapper[4975]: I0122 10:56:54.712692 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-749cddb87d-66769" event={"ID":"d7922a3a-ab4e-4400-9da2-b01efcf11ca7","Type":"ContainerStarted","Data":"0037774f695c7485e03b3ca854611a138a1e2472677b22e5dabea8bf45a9f433"} Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.167863 4975 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.189572 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76rdw\" (UniqueName: \"kubernetes.io/projected/420d2506-62c2-4217-bab2-0ea643c9d8cb-kube-api-access-76rdw\") pod \"420d2506-62c2-4217-bab2-0ea643c9d8cb\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.189614 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-config-data\") pod \"420d2506-62c2-4217-bab2-0ea643c9d8cb\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.189635 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-sg-core-conf-yaml\") pod \"420d2506-62c2-4217-bab2-0ea643c9d8cb\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.189681 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-combined-ca-bundle\") pod \"420d2506-62c2-4217-bab2-0ea643c9d8cb\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.189758 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-scripts\") pod \"420d2506-62c2-4217-bab2-0ea643c9d8cb\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.189780 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-run-httpd\") pod \"420d2506-62c2-4217-bab2-0ea643c9d8cb\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.189839 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-log-httpd\") pod \"420d2506-62c2-4217-bab2-0ea643c9d8cb\" (UID: \"420d2506-62c2-4217-bab2-0ea643c9d8cb\") " Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.190697 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "420d2506-62c2-4217-bab2-0ea643c9d8cb" (UID: "420d2506-62c2-4217-bab2-0ea643c9d8cb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.196635 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "420d2506-62c2-4217-bab2-0ea643c9d8cb" (UID: "420d2506-62c2-4217-bab2-0ea643c9d8cb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.197036 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420d2506-62c2-4217-bab2-0ea643c9d8cb-kube-api-access-76rdw" (OuterVolumeSpecName: "kube-api-access-76rdw") pod "420d2506-62c2-4217-bab2-0ea643c9d8cb" (UID: "420d2506-62c2-4217-bab2-0ea643c9d8cb"). InnerVolumeSpecName "kube-api-access-76rdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.197120 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-scripts" (OuterVolumeSpecName: "scripts") pod "420d2506-62c2-4217-bab2-0ea643c9d8cb" (UID: "420d2506-62c2-4217-bab2-0ea643c9d8cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.217617 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "420d2506-62c2-4217-bab2-0ea643c9d8cb" (UID: "420d2506-62c2-4217-bab2-0ea643c9d8cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.273423 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "420d2506-62c2-4217-bab2-0ea643c9d8cb" (UID: "420d2506-62c2-4217-bab2-0ea643c9d8cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.292129 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76rdw\" (UniqueName: \"kubernetes.io/projected/420d2506-62c2-4217-bab2-0ea643c9d8cb-kube-api-access-76rdw\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.292160 4975 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.292191 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.292200 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.292209 4975 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.292217 4975 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420d2506-62c2-4217-bab2-0ea643c9d8cb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.297159 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-config-data" (OuterVolumeSpecName: "config-data") pod "420d2506-62c2-4217-bab2-0ea643c9d8cb" (UID: "420d2506-62c2-4217-bab2-0ea643c9d8cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.393902 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420d2506-62c2-4217-bab2-0ea643c9d8cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.748063 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f5cf4887-x2cb8" event={"ID":"4c2be393-e3cf-4356-9f2e-6334f7ee99cc","Type":"ContainerStarted","Data":"4a34ade0c6d1ece1ef4ecfc83f9a5f0a30cb425f0aca33cdf682800272385c9f"} Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.753732 4975 generic.go:334] "Generic (PLEG): container finished" podID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerID="31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de" exitCode=0 Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.753869 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerDied","Data":"31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de"} Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.753950 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420d2506-62c2-4217-bab2-0ea643c9d8cb","Type":"ContainerDied","Data":"48d4baefe552b0defdaed3653129918a76b3853e23c8fc1a098d7ec2f3f70582"} Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.754040 4975 scope.go:117] "RemoveContainer" containerID="ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.754225 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.778749 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7f5cf4887-x2cb8" podStartSLOduration=3.750521769 podStartE2EDuration="9.778733309s" podCreationTimestamp="2026-01-22 10:56:46 +0000 UTC" firstStartedPulling="2026-01-22 10:56:48.326466918 +0000 UTC m=+1200.984503028" lastFinishedPulling="2026-01-22 10:56:54.354678458 +0000 UTC m=+1207.012714568" observedRunningTime="2026-01-22 10:56:55.766393889 +0000 UTC m=+1208.424430029" watchObservedRunningTime="2026-01-22 10:56:55.778733309 +0000 UTC m=+1208.436769419" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.786862 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.788176 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.788273 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-749cddb87d-66769" event={"ID":"d7922a3a-ab4e-4400-9da2-b01efcf11ca7","Type":"ContainerStarted","Data":"e0b1b9937992c32b3815a1d06d3b4b9d5d24e82be15f43ac06fea55759418e61"} Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.788358 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d584b4fcd-5llj7" event={"ID":"fa760241-75cd-4e21-8b47-5e367c1cff52","Type":"ContainerStarted","Data":"662ac7fca8efa94eb4b639963c5d6d3f72134948c4958b47961dba818cdbf236"} Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.789167 4975 scope.go:117] "RemoveContainer" containerID="543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.831398 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-749cddb87d-66769" podStartSLOduration=3.753099591 podStartE2EDuration="9.831380956s" podCreationTimestamp="2026-01-22 10:56:46 +0000 UTC" firstStartedPulling="2026-01-22 10:56:48.275353088 +0000 UTC m=+1200.933389198" lastFinishedPulling="2026-01-22 10:56:54.353634463 +0000 UTC m=+1207.011670563" observedRunningTime="2026-01-22 10:56:55.816627639 +0000 UTC m=+1208.474663749" watchObservedRunningTime="2026-01-22 10:56:55.831380956 +0000 UTC m=+1208.489417066" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.850858 4975 scope.go:117] "RemoveContainer" containerID="31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.857892 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.918929 4975 scope.go:117] "RemoveContainer" containerID="dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.924525 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.936417 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:56:55 crc kubenswrapper[4975]: E0122 10:56:55.936764 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="proxy-httpd" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 
10:56:55.936781 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="proxy-httpd" Jan 22 10:56:55 crc kubenswrapper[4975]: E0122 10:56:55.936810 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="ceilometer-notification-agent" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.936816 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="ceilometer-notification-agent" Jan 22 10:56:55 crc kubenswrapper[4975]: E0122 10:56:55.936824 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="ceilometer-central-agent" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.936829 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="ceilometer-central-agent" Jan 22 10:56:55 crc kubenswrapper[4975]: E0122 10:56:55.936838 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="sg-core" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.936843 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="sg-core" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.936997 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="ceilometer-notification-agent" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.937009 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="sg-core" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.937021 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="proxy-httpd" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.937034 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" containerName="ceilometer-central-agent" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.938749 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.941012 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.945392 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.961362 4975 scope.go:117] "RemoveContainer" containerID="ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40" Jan 22 10:56:55 crc kubenswrapper[4975]: E0122 10:56:55.961792 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40\": container with ID starting with ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40 not found: ID does not exist" containerID="ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.961825 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40"} err="failed to get container status \"ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40\": rpc error: code = NotFound desc = could not find container \"ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40\": container with ID starting with ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40 not found: ID does not exist" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.961848 4975 scope.go:117] "RemoveContainer" containerID="543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336" Jan 22 10:56:55 crc kubenswrapper[4975]: E0122 10:56:55.962064 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336\": container with ID starting with 543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336 not found: ID does not exist" containerID="543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.962080 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336"} err="failed to get container status \"543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336\": rpc error: code = NotFound desc = could not find container \"543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336\": container with ID starting with 543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336 not found: ID does not exist" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.962091 4975 scope.go:117] "RemoveContainer" containerID="31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de" Jan 22 10:56:55 crc kubenswrapper[4975]: E0122 10:56:55.962267 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de\": container with ID starting with 31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de not found: ID does not exist" containerID="31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 
10:56:55.962280 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de"} err="failed to get container status \"31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de\": rpc error: code = NotFound desc = could not find container \"31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de\": container with ID starting with 31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de not found: ID does not exist" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.962291 4975 scope.go:117] "RemoveContainer" containerID="dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85" Jan 22 10:56:55 crc kubenswrapper[4975]: E0122 10:56:55.962566 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85\": container with ID starting with dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85 not found: ID does not exist" containerID="dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.962582 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85"} err="failed to get container status \"dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85\": rpc error: code = NotFound desc = could not find container \"dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85\": container with ID starting with dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85 not found: ID does not exist" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.963119 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7d584b4fcd-5llj7" podStartSLOduration=6.963110193 podStartE2EDuration="6.963110193s" podCreationTimestamp="2026-01-22 10:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:56:55.867922093 +0000 UTC m=+1208.525958213" watchObservedRunningTime="2026-01-22 10:56:55.963110193 +0000 UTC m=+1208.621146303" Jan 22 10:56:55 crc kubenswrapper[4975]: I0122 10:56:55.979366 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.123966 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-log-httpd\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.124098 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-run-httpd\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.124148 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfvx6\" (UniqueName: \"kubernetes.io/projected/fb8f805f-609a-4a60-a09a-b4da8360c470-kube-api-access-zfvx6\") pod \"ceilometer-0\" (UID: 
\"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.124198 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.124310 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.124367 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-scripts\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.124388 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-config-data\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.225398 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-log-httpd\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.225478 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-run-httpd\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.225505 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfvx6\" (UniqueName: \"kubernetes.io/projected/fb8f805f-609a-4a60-a09a-b4da8360c470-kube-api-access-zfvx6\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.225533 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.225587 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.225609 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-scripts\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.225627 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-config-data\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.227310 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-log-httpd\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.227539 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-run-httpd\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.230947 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.231170 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-config-data\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.234866 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-scripts\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.235468 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.257070 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfvx6\" (UniqueName: \"kubernetes.io/projected/fb8f805f-609a-4a60-a09a-b4da8360c470-kube-api-access-zfvx6\") pod \"ceilometer-0\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.429181 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.602596 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.638843 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-scripts\") pod \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.638976 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f699\" (UniqueName: \"kubernetes.io/projected/2a762d84-4b11-4b4b-91e1-51e43be6ff86-kube-api-access-4f699\") pod \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.639019 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-config-data\") pod \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.639047 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a762d84-4b11-4b4b-91e1-51e43be6ff86-logs\") pod \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.639083 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a762d84-4b11-4b4b-91e1-51e43be6ff86-horizon-secret-key\") pod \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\" (UID: \"2a762d84-4b11-4b4b-91e1-51e43be6ff86\") " Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.641482 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a762d84-4b11-4b4b-91e1-51e43be6ff86-logs" (OuterVolumeSpecName: "logs") pod "2a762d84-4b11-4b4b-91e1-51e43be6ff86" (UID: "2a762d84-4b11-4b4b-91e1-51e43be6ff86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.645712 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a762d84-4b11-4b4b-91e1-51e43be6ff86-kube-api-access-4f699" (OuterVolumeSpecName: "kube-api-access-4f699") pod "2a762d84-4b11-4b4b-91e1-51e43be6ff86" (UID: "2a762d84-4b11-4b4b-91e1-51e43be6ff86"). InnerVolumeSpecName "kube-api-access-4f699". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.646642 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a762d84-4b11-4b4b-91e1-51e43be6ff86-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2a762d84-4b11-4b4b-91e1-51e43be6ff86" (UID: "2a762d84-4b11-4b4b-91e1-51e43be6ff86"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.670233 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-scripts" (OuterVolumeSpecName: "scripts") pod "2a762d84-4b11-4b4b-91e1-51e43be6ff86" (UID: "2a762d84-4b11-4b4b-91e1-51e43be6ff86"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.670293 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-config-data" (OuterVolumeSpecName: "config-data") pod "2a762d84-4b11-4b4b-91e1-51e43be6ff86" (UID: "2a762d84-4b11-4b4b-91e1-51e43be6ff86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.742480 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.742508 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f699\" (UniqueName: \"kubernetes.io/projected/2a762d84-4b11-4b4b-91e1-51e43be6ff86-kube-api-access-4f699\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.742517 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a762d84-4b11-4b4b-91e1-51e43be6ff86-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.742543 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a762d84-4b11-4b4b-91e1-51e43be6ff86-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.742553 4975 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2a762d84-4b11-4b4b-91e1-51e43be6ff86-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.785677 4975 generic.go:334] "Generic (PLEG): container finished" podID="f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" containerID="fa3f54390ce11f410f44674f2f87ab26787fac6ef23dfb8f27ec565a28a1eacf" exitCode=0 Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.785746 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x24jb" event={"ID":"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f","Type":"ContainerDied","Data":"fa3f54390ce11f410f44674f2f87ab26787fac6ef23dfb8f27ec565a28a1eacf"} Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.788437 4975 generic.go:334] "Generic (PLEG): container finished" podID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerID="c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca" exitCode=137 Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.788458 4975 generic.go:334] "Generic (PLEG): container finished" podID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerID="2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7" exitCode=137 Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.788494 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b986d67d5-jkmkh" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.788499 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b986d67d5-jkmkh" event={"ID":"2a762d84-4b11-4b4b-91e1-51e43be6ff86","Type":"ContainerDied","Data":"c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca"} Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.788589 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b986d67d5-jkmkh" event={"ID":"2a762d84-4b11-4b4b-91e1-51e43be6ff86","Type":"ContainerDied","Data":"2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7"} Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.788600 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b986d67d5-jkmkh" event={"ID":"2a762d84-4b11-4b4b-91e1-51e43be6ff86","Type":"ContainerDied","Data":"b56162b1c92adaaf0b0fdd13e949146b5b7ebfd3a9f5d67d0664a4a71eff7b5d"} Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.788615 4975 scope.go:117] "RemoveContainer" containerID="c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.831777 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b986d67d5-jkmkh"] Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.840975 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b986d67d5-jkmkh"] Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.890079 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.910973 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.977539 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bt7hf"] Jan 22 10:56:56 crc kubenswrapper[4975]: I0122 10:56:56.977772 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" podUID="40dba24f-eb65-4337-b3e8-6498018ae0cd" containerName="dnsmasq-dns" containerID="cri-o://15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075" gracePeriod=10 Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.016621 4975 scope.go:117] "RemoveContainer" containerID="2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.111855 4975 scope.go:117] "RemoveContainer" containerID="c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca" Jan 22 10:56:57 crc kubenswrapper[4975]: E0122 10:56:57.114953 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca\": container with ID starting with c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca not found: ID does not exist" containerID="c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.115008 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca"} err="failed to get container status \"c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca\": rpc error: code = NotFound desc = could not find container 
\"c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca\": container with ID starting with c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca not found: ID does not exist" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.115045 4975 scope.go:117] "RemoveContainer" containerID="2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7" Jan 22 10:56:57 crc kubenswrapper[4975]: E0122 10:56:57.120144 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7\": container with ID starting with 2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7 not found: ID does not exist" containerID="2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.120211 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7"} err="failed to get container status \"2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7\": rpc error: code = NotFound desc = could not find container \"2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7\": container with ID starting with 2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7 not found: ID does not exist" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.120238 4975 scope.go:117] "RemoveContainer" containerID="c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.120780 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca"} err="failed to get container status \"c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca\": rpc error: code = NotFound desc = could not find container \"c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca\": container with ID starting with c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca not found: ID does not exist" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.120822 4975 scope.go:117] "RemoveContainer" containerID="2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.121348 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7"} err="failed to get container status \"2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7\": rpc error: code = NotFound desc = could not find container \"2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7\": container with ID starting with 2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7 not found: ID does not exist" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.584311 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.658969 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-config\") pod \"40dba24f-eb65-4337-b3e8-6498018ae0cd\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.659278 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctwkm\" (UniqueName: \"kubernetes.io/projected/40dba24f-eb65-4337-b3e8-6498018ae0cd-kube-api-access-ctwkm\") pod \"40dba24f-eb65-4337-b3e8-6498018ae0cd\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.659372 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-svc\") pod \"40dba24f-eb65-4337-b3e8-6498018ae0cd\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.659403 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-swift-storage-0\") pod \"40dba24f-eb65-4337-b3e8-6498018ae0cd\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.659476 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-nb\") pod \"40dba24f-eb65-4337-b3e8-6498018ae0cd\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.659636 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-sb\") pod \"40dba24f-eb65-4337-b3e8-6498018ae0cd\" (UID: \"40dba24f-eb65-4337-b3e8-6498018ae0cd\") " Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.673721 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40dba24f-eb65-4337-b3e8-6498018ae0cd-kube-api-access-ctwkm" (OuterVolumeSpecName: "kube-api-access-ctwkm") pod "40dba24f-eb65-4337-b3e8-6498018ae0cd" (UID: "40dba24f-eb65-4337-b3e8-6498018ae0cd"). InnerVolumeSpecName "kube-api-access-ctwkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.709874 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40dba24f-eb65-4337-b3e8-6498018ae0cd" (UID: "40dba24f-eb65-4337-b3e8-6498018ae0cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.715921 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40dba24f-eb65-4337-b3e8-6498018ae0cd" (UID: "40dba24f-eb65-4337-b3e8-6498018ae0cd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.739877 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40dba24f-eb65-4337-b3e8-6498018ae0cd" (UID: "40dba24f-eb65-4337-b3e8-6498018ae0cd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.742068 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "40dba24f-eb65-4337-b3e8-6498018ae0cd" (UID: "40dba24f-eb65-4337-b3e8-6498018ae0cd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.765558 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.765609 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctwkm\" (UniqueName: \"kubernetes.io/projected/40dba24f-eb65-4337-b3e8-6498018ae0cd-kube-api-access-ctwkm\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.765623 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.765636 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.765654 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.798048 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-config" (OuterVolumeSpecName: "config") pod "40dba24f-eb65-4337-b3e8-6498018ae0cd" (UID: "40dba24f-eb65-4337-b3e8-6498018ae0cd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.798588 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" path="/var/lib/kubelet/pods/2a762d84-4b11-4b4b-91e1-51e43be6ff86/volumes" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.799218 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420d2506-62c2-4217-bab2-0ea643c9d8cb" path="/var/lib/kubelet/pods/420d2506-62c2-4217-bab2-0ea643c9d8cb/volumes" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.817362 4975 generic.go:334] "Generic (PLEG): container finished" podID="40dba24f-eb65-4337-b3e8-6498018ae0cd" containerID="15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075" exitCode=0 Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.817529 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.818305 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" event={"ID":"40dba24f-eb65-4337-b3e8-6498018ae0cd","Type":"ContainerDied","Data":"15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075"} Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.818343 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bt7hf" event={"ID":"40dba24f-eb65-4337-b3e8-6498018ae0cd","Type":"ContainerDied","Data":"03bf6c3c89deb9815bd23aea1ad006734d2417ca2f3ed74399836dec59563aa4"} Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.818361 4975 scope.go:117] "RemoveContainer" containerID="15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.823795 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerStarted","Data":"977e454cfce5d639d79c511be053129cad12c5451c2d7def3d7821bf68c1b393"} Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.840615 4975 scope.go:117] "RemoveContainer" containerID="434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.842178 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bt7hf"] Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.848505 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bt7hf"] Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.864730 4975 scope.go:117] "RemoveContainer" containerID="15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075" Jan 22 10:56:57 crc kubenswrapper[4975]: E0122 10:56:57.868621 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075\": container with ID starting with 15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075 not found: ID does not exist" containerID="15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.868669 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075"} err="failed to get container status 
\"15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075\": rpc error: code = NotFound desc = could not find container \"15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075\": container with ID starting with 15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075 not found: ID does not exist" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.868701 4975 scope.go:117] "RemoveContainer" containerID="434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369" Jan 22 10:56:57 crc kubenswrapper[4975]: E0122 10:56:57.869009 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369\": container with ID starting with 434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369 not found: ID does not exist" containerID="434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.869031 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369"} err="failed to get container status \"434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369\": rpc error: code = NotFound desc = could not find container \"434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369\": container with ID starting with 434a570b129e27e3fa6633bc014d89705d1b0bce6d562eb96626e926d53a2369 not found: ID does not exist" Jan 22 10:56:57 crc kubenswrapper[4975]: I0122 10:56:57.870463 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40dba24f-eb65-4337-b3e8-6498018ae0cd-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.103914 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x24jb" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.175552 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-etc-machine-id\") pod \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.176072 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-config-data\") pod \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.176165 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-scripts\") pod \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.176272 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnl69\" (UniqueName: \"kubernetes.io/projected/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-kube-api-access-qnl69\") pod \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.176416 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-combined-ca-bundle\") pod \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.176535 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-db-sync-config-data\") pod \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\" (UID: \"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f\") " Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.175713 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" (UID: "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.190290 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-scripts" (OuterVolumeSpecName: "scripts") pod "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" (UID: "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.190578 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" (UID: "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.203103 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-kube-api-access-qnl69" (OuterVolumeSpecName: "kube-api-access-qnl69") pod "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" (UID: "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f"). InnerVolumeSpecName "kube-api-access-qnl69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.236545 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" (UID: "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.244546 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-config-data" (OuterVolumeSpecName: "config-data") pod "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" (UID: "f58e68a6-dee2-49a4-be2d-e6b1ca69b19f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.278345 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.278382 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnl69\" (UniqueName: \"kubernetes.io/projected/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-kube-api-access-qnl69\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.278393 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.278405 4975 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.278431 4975 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.278440 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.580626 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.656645 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.749178 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 
10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.843614 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x24jb" event={"ID":"f58e68a6-dee2-49a4-be2d-e6b1ca69b19f","Type":"ContainerDied","Data":"b9142cac76f6745609a714d1e015b5c611624dc75b88fc27b0bbb6b42d0ce9bd"} Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.843651 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9142cac76f6745609a714d1e015b5c611624dc75b88fc27b0bbb6b42d0ce9bd" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.843678 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x24jb" Jan 22 10:56:58 crc kubenswrapper[4975]: I0122 10:56:58.849407 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerStarted","Data":"382e6447c53a2d9603ed67056271c7ef5de19fb331dfdff2fe14bd08ebfa8227"} Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.066120 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:56:59 crc kubenswrapper[4975]: E0122 10:56:59.067951 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" containerName="cinder-db-sync" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.067981 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" containerName="cinder-db-sync" Jan 22 10:56:59 crc kubenswrapper[4975]: E0122 10:56:59.068019 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerName="horizon-log" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.068025 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerName="horizon-log" Jan 22 10:56:59 crc kubenswrapper[4975]: E0122 10:56:59.068033 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerName="horizon" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.068039 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerName="horizon" Jan 22 10:56:59 crc kubenswrapper[4975]: E0122 10:56:59.068051 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40dba24f-eb65-4337-b3e8-6498018ae0cd" containerName="init" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.068056 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="40dba24f-eb65-4337-b3e8-6498018ae0cd" containerName="init" Jan 22 10:56:59 crc kubenswrapper[4975]: E0122 10:56:59.068068 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40dba24f-eb65-4337-b3e8-6498018ae0cd" containerName="dnsmasq-dns" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.068074 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="40dba24f-eb65-4337-b3e8-6498018ae0cd" containerName="dnsmasq-dns" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.068275 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerName="horizon-log" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.068286 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="40dba24f-eb65-4337-b3e8-6498018ae0cd" containerName="dnsmasq-dns" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.068295 4975 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" containerName="cinder-db-sync" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.068307 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a762d84-4b11-4b4b-91e1-51e43be6ff86" containerName="horizon" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.079988 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.082997 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-k2tzk" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.084557 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.086321 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.086467 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.086596 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.202436 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.202794 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcbgz\" (UniqueName: \"kubernetes.io/projected/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-kube-api-access-rcbgz\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.202822 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.202837 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.202870 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.202885 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.304768 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcbgz\" (UniqueName: \"kubernetes.io/projected/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-kube-api-access-rcbgz\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.304817 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.304835 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.304870 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.304890 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.304952 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.305435 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.319909 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.319915 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.323191 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.331757 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.348588 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vm7w6"] Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.350046 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.358828 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcbgz\" (UniqueName: \"kubernetes.io/projected/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-kube-api-access-rcbgz\") pod \"cinder-scheduler-0\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.361747 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vm7w6"] Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.429182 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.479753 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.481632 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.485580 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.496056 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532524 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532565 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-scripts\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532596 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532627 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532650 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-config\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532678 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9303a4bb-59d1-4a47-afe3-0275a2334e9e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532703 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9303a4bb-59d1-4a47-afe3-0275a2334e9e-logs\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532719 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532732 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data-custom\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532750 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttdwl\" (UniqueName: \"kubernetes.io/projected/9303a4bb-59d1-4a47-afe3-0275a2334e9e-kube-api-access-ttdwl\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532770 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532819 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.532838 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gtqn\" (UniqueName: \"kubernetes.io/projected/774f6271-a856-40ae-a082-f27ef97f7582-kube-api-access-8gtqn\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.633749 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634147 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634191 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gtqn\" (UniqueName: \"kubernetes.io/projected/774f6271-a856-40ae-a082-f27ef97f7582-kube-api-access-8gtqn\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634247 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634265 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-scripts\") pod \"cinder-api-0\" (UID: 
\"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634294 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634323 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634368 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-config\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634398 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9303a4bb-59d1-4a47-afe3-0275a2334e9e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634421 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9303a4bb-59d1-4a47-afe3-0275a2334e9e-logs\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634438 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634452 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data-custom\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634471 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttdwl\" (UniqueName: \"kubernetes.io/projected/9303a4bb-59d1-4a47-afe3-0275a2334e9e-kube-api-access-ttdwl\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634490 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.634703 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9303a4bb-59d1-4a47-afe3-0275a2334e9e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.635581 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.635643 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9303a4bb-59d1-4a47-afe3-0275a2334e9e-logs\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.636079 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.639828 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-config\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.641749 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.642175 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.642962 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.643381 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.645022 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data-custom\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.658640 4975 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ttdwl\" (UniqueName: \"kubernetes.io/projected/9303a4bb-59d1-4a47-afe3-0275a2334e9e-kube-api-access-ttdwl\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.659782 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-scripts\") pod \"cinder-api-0\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.670015 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gtqn\" (UniqueName: \"kubernetes.io/projected/774f6271-a856-40ae-a082-f27ef97f7582-kube-api-access-8gtqn\") pod \"dnsmasq-dns-5c9776ccc5-vm7w6\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.772399 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40dba24f-eb65-4337-b3e8-6498018ae0cd" path="/var/lib/kubelet/pods/40dba24f-eb65-4337-b3e8-6498018ae0cd/volumes" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.798157 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.827977 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.862823 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerStarted","Data":"a95cac686e5dac6786e775d42ad6fbc775ed63671230bacfec1bbcbb3e353df0"} Jan 22 10:56:59 crc kubenswrapper[4975]: I0122 10:56:59.993184 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:57:00 crc kubenswrapper[4975]: I0122 10:57:00.385016 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:57:00 crc kubenswrapper[4975]: I0122 10:57:00.454126 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vm7w6"] Jan 22 10:57:00 crc kubenswrapper[4975]: I0122 10:57:00.780856 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:57:00 crc kubenswrapper[4975]: I0122 10:57:00.877023 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9303a4bb-59d1-4a47-afe3-0275a2334e9e","Type":"ContainerStarted","Data":"57995a093c86ec72c93514914135eaf361c6a7d1019b3f955de1db88c7715bc1"} Jan 22 10:57:00 crc kubenswrapper[4975]: I0122 10:57:00.878744 4975 generic.go:334] "Generic (PLEG): container finished" podID="774f6271-a856-40ae-a082-f27ef97f7582" containerID="5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662" exitCode=0 Jan 22 10:57:00 crc kubenswrapper[4975]: I0122 10:57:00.878794 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" event={"ID":"774f6271-a856-40ae-a082-f27ef97f7582","Type":"ContainerDied","Data":"5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662"} Jan 22 10:57:00 crc kubenswrapper[4975]: I0122 10:57:00.878813 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" event={"ID":"774f6271-a856-40ae-a082-f27ef97f7582","Type":"ContainerStarted","Data":"3e3fe37cbe12a6f8e4616941367b5231d26e58a3466d9aeef735d120ec1c9fd7"} Jan 22 10:57:00 crc kubenswrapper[4975]: I0122 10:57:00.889663 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerStarted","Data":"24ce52effd2845187413e12425548153724069a77b1f2b64faa5c46c4dcfc156"} Jan 22 10:57:00 crc kubenswrapper[4975]: I0122 10:57:00.895866 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a","Type":"ContainerStarted","Data":"cbc8c181ad6e765a3984297041db98faf29f27746f001b40dc30d3b49b666c2a"} Jan 22 10:57:01 crc kubenswrapper[4975]: I0122 10:57:01.306141 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:57:01 crc kubenswrapper[4975]: I0122 10:57:01.814580 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:57:01 crc kubenswrapper[4975]: I0122 10:57:01.907363 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9303a4bb-59d1-4a47-afe3-0275a2334e9e","Type":"ContainerStarted","Data":"a04f9d2943d88f13c1dd17251d294aa9e4769c2b332bcca6076523b6dbe991de"} Jan 22 10:57:01 crc kubenswrapper[4975]: I0122 10:57:01.915837 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" event={"ID":"774f6271-a856-40ae-a082-f27ef97f7582","Type":"ContainerStarted","Data":"0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9"} Jan 22 10:57:01 crc kubenswrapper[4975]: I0122 10:57:01.916994 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:57:01 crc kubenswrapper[4975]: I0122 10:57:01.952921 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" podStartSLOduration=2.952903731 podStartE2EDuration="2.952903731s" podCreationTimestamp="2026-01-22 10:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:57:01.943265857 +0000 UTC m=+1214.601301967" watchObservedRunningTime="2026-01-22 10:57:01.952903731 +0000 UTC m=+1214.610939841" Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.120864 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7cb95f4cc6-jdv4m" Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.191915 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bbdd45bdd-ckdgl"] Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.192502 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bbdd45bdd-ckdgl" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon-log" containerID="cri-o://39b1484aa175e8832a4a3a02dba23da796deff516ab271d29c22c353de6d296d" gracePeriod=30 Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.192930 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bbdd45bdd-ckdgl" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon" containerID="cri-o://e133221d6ac1001025ac6a494021a555d0ffd31b1f0f89a766296818d86b5169" gracePeriod=30 Jan 22 10:57:02 
crc kubenswrapper[4975]: I0122 10:57:02.924904 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a","Type":"ContainerStarted","Data":"8cfe09292b3fd1e15ecfa911978c435900f06cb261e56178529502cd65a47887"} Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.926701 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9303a4bb-59d1-4a47-afe3-0275a2334e9e","Type":"ContainerStarted","Data":"6f38ca4ee298482e071b0283f14dbcb92da910c30eb6524f8f496ca9fc3e5d70"} Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.926820 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerName="cinder-api-log" containerID="cri-o://a04f9d2943d88f13c1dd17251d294aa9e4769c2b332bcca6076523b6dbe991de" gracePeriod=30 Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.926897 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.927197 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerName="cinder-api" containerID="cri-o://6f38ca4ee298482e071b0283f14dbcb92da910c30eb6524f8f496ca9fc3e5d70" gracePeriod=30 Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.933785 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerStarted","Data":"1e78aa99b48f9a01e9fb62997007a695ced9c744c3b2704f07774e1fcd5ddf1a"} Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.947882 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.9478683180000003 podStartE2EDuration="3.947868318s" podCreationTimestamp="2026-01-22 10:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:57:02.946459024 +0000 UTC m=+1215.604495134" watchObservedRunningTime="2026-01-22 10:57:02.947868318 +0000 UTC m=+1215.605904428" Jan 22 10:57:02 crc kubenswrapper[4975]: I0122 10:57:02.970962 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.29677804 podStartE2EDuration="7.970942598s" podCreationTimestamp="2026-01-22 10:56:55 +0000 UTC" firstStartedPulling="2026-01-22 10:56:57.117898599 +0000 UTC m=+1209.775934709" lastFinishedPulling="2026-01-22 10:57:01.792063167 +0000 UTC m=+1214.450099267" observedRunningTime="2026-01-22 10:57:02.969257887 +0000 UTC m=+1215.627294007" watchObservedRunningTime="2026-01-22 10:57:02.970942598 +0000 UTC m=+1215.628978708" Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.675190 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d584b4fcd-5llj7" Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.753189 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5545f57d98-4rk8s"] Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.754303 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5545f57d98-4rk8s" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api-log" 
containerID="cri-o://4fe04ab208bfce1faac1d7526336eb52ca2e1abcb9d7392242a58d72bafbabbe" gracePeriod=30 Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.754629 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5545f57d98-4rk8s" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api" containerID="cri-o://87cb7fef6a5827783005c9df13ac3de0b70787e88659a55fd37ce5c9da9dc9dc" gracePeriod=30 Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.760258 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5545f57d98-4rk8s" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.958760 4975 generic.go:334] "Generic (PLEG): container finished" podID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerID="4fe04ab208bfce1faac1d7526336eb52ca2e1abcb9d7392242a58d72bafbabbe" exitCode=143 Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.958809 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5545f57d98-4rk8s" event={"ID":"5e5d1da3-0663-4a0d-984b-5e71e31a2e88","Type":"ContainerDied","Data":"4fe04ab208bfce1faac1d7526336eb52ca2e1abcb9d7392242a58d72bafbabbe"} Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.960852 4975 generic.go:334] "Generic (PLEG): container finished" podID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerID="6f38ca4ee298482e071b0283f14dbcb92da910c30eb6524f8f496ca9fc3e5d70" exitCode=0 Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.960866 4975 generic.go:334] "Generic (PLEG): container finished" podID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerID="a04f9d2943d88f13c1dd17251d294aa9e4769c2b332bcca6076523b6dbe991de" exitCode=143 Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.960921 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9303a4bb-59d1-4a47-afe3-0275a2334e9e","Type":"ContainerDied","Data":"6f38ca4ee298482e071b0283f14dbcb92da910c30eb6524f8f496ca9fc3e5d70"} Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.960937 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9303a4bb-59d1-4a47-afe3-0275a2334e9e","Type":"ContainerDied","Data":"a04f9d2943d88f13c1dd17251d294aa9e4769c2b332bcca6076523b6dbe991de"} Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.964435 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a","Type":"ContainerStarted","Data":"32a69262dffb789e16ae2fee8dec0f65f794c1d355ca7310f167196d750699b7"} Jan 22 10:57:03 crc kubenswrapper[4975]: I0122 10:57:03.964455 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.185264 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.209522 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.020281186 podStartE2EDuration="5.209502077s" podCreationTimestamp="2026-01-22 10:56:59 +0000 UTC" firstStartedPulling="2026-01-22 10:57:00.079817393 +0000 UTC m=+1212.737853503" lastFinishedPulling="2026-01-22 10:57:01.269038284 +0000 UTC m=+1213.927074394" observedRunningTime="2026-01-22 10:57:03.997984323 +0000 UTC m=+1216.656020433" watchObservedRunningTime="2026-01-22 10:57:04.209502077 +0000 UTC m=+1216.867538187" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.309786 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttdwl\" (UniqueName: \"kubernetes.io/projected/9303a4bb-59d1-4a47-afe3-0275a2334e9e-kube-api-access-ttdwl\") pod \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.309960 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9303a4bb-59d1-4a47-afe3-0275a2334e9e-etc-machine-id\") pod \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.310071 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9303a4bb-59d1-4a47-afe3-0275a2334e9e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9303a4bb-59d1-4a47-afe3-0275a2334e9e" (UID: "9303a4bb-59d1-4a47-afe3-0275a2334e9e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.310274 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data\") pod \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.310401 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-combined-ca-bundle\") pod \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.310597 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data-custom\") pod \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.311120 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-scripts\") pod \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\" (UID: \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.311272 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9303a4bb-59d1-4a47-afe3-0275a2334e9e-logs\") pod \"9303a4bb-59d1-4a47-afe3-0275a2334e9e\" (UID: 
\"9303a4bb-59d1-4a47-afe3-0275a2334e9e\") " Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.311822 4975 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9303a4bb-59d1-4a47-afe3-0275a2334e9e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.312193 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9303a4bb-59d1-4a47-afe3-0275a2334e9e-logs" (OuterVolumeSpecName: "logs") pod "9303a4bb-59d1-4a47-afe3-0275a2334e9e" (UID: "9303a4bb-59d1-4a47-afe3-0275a2334e9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.323496 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9303a4bb-59d1-4a47-afe3-0275a2334e9e-kube-api-access-ttdwl" (OuterVolumeSpecName: "kube-api-access-ttdwl") pod "9303a4bb-59d1-4a47-afe3-0275a2334e9e" (UID: "9303a4bb-59d1-4a47-afe3-0275a2334e9e"). InnerVolumeSpecName "kube-api-access-ttdwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.323853 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9303a4bb-59d1-4a47-afe3-0275a2334e9e" (UID: "9303a4bb-59d1-4a47-afe3-0275a2334e9e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.336517 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-scripts" (OuterVolumeSpecName: "scripts") pod "9303a4bb-59d1-4a47-afe3-0275a2334e9e" (UID: "9303a4bb-59d1-4a47-afe3-0275a2334e9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.391500 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9303a4bb-59d1-4a47-afe3-0275a2334e9e" (UID: "9303a4bb-59d1-4a47-afe3-0275a2334e9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.404552 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data" (OuterVolumeSpecName: "config-data") pod "9303a4bb-59d1-4a47-afe3-0275a2334e9e" (UID: "9303a4bb-59d1-4a47-afe3-0275a2334e9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.415451 4975 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.415486 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.415499 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9303a4bb-59d1-4a47-afe3-0275a2334e9e-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.415508 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttdwl\" (UniqueName: \"kubernetes.io/projected/9303a4bb-59d1-4a47-afe3-0275a2334e9e-kube-api-access-ttdwl\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.415518 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.415526 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9303a4bb-59d1-4a47-afe3-0275a2334e9e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.430412 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.976800 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.976959 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9303a4bb-59d1-4a47-afe3-0275a2334e9e","Type":"ContainerDied","Data":"57995a093c86ec72c93514914135eaf361c6a7d1019b3f955de1db88c7715bc1"} Jan 22 10:57:04 crc kubenswrapper[4975]: I0122 10:57:04.977916 4975 scope.go:117] "RemoveContainer" containerID="6f38ca4ee298482e071b0283f14dbcb92da910c30eb6524f8f496ca9fc3e5d70" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.025417 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.032425 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.041526 4975 scope.go:117] "RemoveContainer" containerID="a04f9d2943d88f13c1dd17251d294aa9e4769c2b332bcca6076523b6dbe991de" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.059967 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:57:05 crc kubenswrapper[4975]: E0122 10:57:05.064365 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerName="cinder-api-log" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.064391 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerName="cinder-api-log" Jan 22 10:57:05 crc kubenswrapper[4975]: E0122 10:57:05.064410 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerName="cinder-api" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.064416 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerName="cinder-api" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.064622 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerName="cinder-api" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.064633 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" containerName="cinder-api-log" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.065677 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.067930 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.068182 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.068479 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.095463 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.126806 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e51991-42f2-4b42-a988-73e2dd891b4d-logs\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.126891 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-scripts\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.126927 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlc7h\" (UniqueName: \"kubernetes.io/projected/b0e51991-42f2-4b42-a988-73e2dd891b4d-kube-api-access-hlc7h\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.126972 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.127036 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0e51991-42f2-4b42-a988-73e2dd891b4d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.127059 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-config-data\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.127093 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-config-data-custom\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.127133 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.127157 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.144546 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6bb9bdd9dd-tqvz2" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.228769 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.228813 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.228876 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e51991-42f2-4b42-a988-73e2dd891b4d-logs\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.228930 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-scripts\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.228961 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlc7h\" (UniqueName: \"kubernetes.io/projected/b0e51991-42f2-4b42-a988-73e2dd891b4d-kube-api-access-hlc7h\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.229004 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.229070 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0e51991-42f2-4b42-a988-73e2dd891b4d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.229092 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-config-data\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.229121 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-config-data-custom\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.230579 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0e51991-42f2-4b42-a988-73e2dd891b4d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.230801 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e51991-42f2-4b42-a988-73e2dd891b4d-logs\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.236057 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.238854 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-config-data-custom\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.238997 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.239960 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-config-data\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.243367 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-scripts\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.253888 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlc7h\" (UniqueName: \"kubernetes.io/projected/b0e51991-42f2-4b42-a988-73e2dd891b4d-kube-api-access-hlc7h\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.253969 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b0e51991-42f2-4b42-a988-73e2dd891b4d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b0e51991-42f2-4b42-a988-73e2dd891b4d\") " pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.390688 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.722218 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.771421 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9303a4bb-59d1-4a47-afe3-0275a2334e9e" path="/var/lib/kubelet/pods/9303a4bb-59d1-4a47-afe3-0275a2334e9e/volumes" Jan 22 10:57:05 crc kubenswrapper[4975]: I0122 10:57:05.968198 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:57:06 crc kubenswrapper[4975]: I0122 10:57:06.018114 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b0e51991-42f2-4b42-a988-73e2dd891b4d","Type":"ContainerStarted","Data":"55a75bdd73a9d4ce71e1b2703dca8dd955738ba698644e506cf8664e01f96851"} Jan 22 10:57:06 crc kubenswrapper[4975]: I0122 10:57:06.027132 4975 generic.go:334] "Generic (PLEG): container finished" podID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerID="e133221d6ac1001025ac6a494021a555d0ffd31b1f0f89a766296818d86b5169" exitCode=0 Jan 22 10:57:06 crc kubenswrapper[4975]: I0122 10:57:06.027191 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbdd45bdd-ckdgl" event={"ID":"fc5d94b3-48d8-485a-b307-9a011d77ee76","Type":"ContainerDied","Data":"e133221d6ac1001025ac6a494021a555d0ffd31b1f0f89a766296818d86b5169"} Jan 22 10:57:06 crc kubenswrapper[4975]: I0122 10:57:06.171583 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:57:06 crc kubenswrapper[4975]: I0122 10:57:06.554879 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85bb784b78-f8g8c" Jan 22 10:57:06 crc kubenswrapper[4975]: I0122 10:57:06.654391 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5bbdd45bdd-ckdgl" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 10:57:06 crc kubenswrapper[4975]: I0122 10:57:06.990028 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5545f57d98-4rk8s" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Jan 22 10:57:06 crc kubenswrapper[4975]: I0122 10:57:06.992175 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5545f57d98-4rk8s" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.055302 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"b0e51991-42f2-4b42-a988-73e2dd891b4d","Type":"ContainerStarted","Data":"09e1d680c2cd326cbceebbd806a5b5b20928fd1ec995cc44cf6eec18a2ae9989"} Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.067076 4975 generic.go:334] "Generic (PLEG): container finished" podID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerID="87cb7fef6a5827783005c9df13ac3de0b70787e88659a55fd37ce5c9da9dc9dc" exitCode=0 Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.067171 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5545f57d98-4rk8s" event={"ID":"5e5d1da3-0663-4a0d-984b-5e71e31a2e88","Type":"ContainerDied","Data":"87cb7fef6a5827783005c9df13ac3de0b70787e88659a55fd37ce5c9da9dc9dc"} Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.319982 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.383120 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data-custom\") pod \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.383243 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-combined-ca-bundle\") pod \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.383356 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txrqv\" (UniqueName: \"kubernetes.io/projected/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-kube-api-access-txrqv\") pod \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.383499 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data\") pod \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.383552 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-logs\") pod \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\" (UID: \"5e5d1da3-0663-4a0d-984b-5e71e31a2e88\") " Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.384208 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-logs" (OuterVolumeSpecName: "logs") pod "5e5d1da3-0663-4a0d-984b-5e71e31a2e88" (UID: "5e5d1da3-0663-4a0d-984b-5e71e31a2e88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.391484 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5e5d1da3-0663-4a0d-984b-5e71e31a2e88" (UID: "5e5d1da3-0663-4a0d-984b-5e71e31a2e88"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.401562 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-kube-api-access-txrqv" (OuterVolumeSpecName: "kube-api-access-txrqv") pod "5e5d1da3-0663-4a0d-984b-5e71e31a2e88" (UID: "5e5d1da3-0663-4a0d-984b-5e71e31a2e88"). InnerVolumeSpecName "kube-api-access-txrqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.427664 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e5d1da3-0663-4a0d-984b-5e71e31a2e88" (UID: "5e5d1da3-0663-4a0d-984b-5e71e31a2e88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.481446 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data" (OuterVolumeSpecName: "config-data") pod "5e5d1da3-0663-4a0d-984b-5e71e31a2e88" (UID: "5e5d1da3-0663-4a0d-984b-5e71e31a2e88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.485384 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.485415 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txrqv\" (UniqueName: \"kubernetes.io/projected/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-kube-api-access-txrqv\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.485426 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.485436 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.485446 4975 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e5d1da3-0663-4a0d-984b-5e71e31a2e88-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.661393 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79786fd675-77kk9" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.813691 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-54575f849-qtpk8" Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.899696 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d6d6969d8-6mpvr"] Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.899887 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d6d6969d8-6mpvr" podUID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerName="neutron-api" 
containerID="cri-o://170be906508f324bc03f9f51c81d740af00020c9ee1ec02a921092d973426289" gracePeriod=30 Jan 22 10:57:07 crc kubenswrapper[4975]: I0122 10:57:07.900297 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d6d6969d8-6mpvr" podUID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerName="neutron-httpd" containerID="cri-o://e18c920c31b082cbd365a6ee627abb84120e29d97369f76ac80c364344894f59" gracePeriod=30 Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.075210 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b0e51991-42f2-4b42-a988-73e2dd891b4d","Type":"ContainerStarted","Data":"c9bc94c5bcd6ad35c1ec1c0e3b884411e0ca329b9ab364d94fe810f04f88a158"} Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.075499 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.076918 4975 generic.go:334] "Generic (PLEG): container finished" podID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerID="e18c920c31b082cbd365a6ee627abb84120e29d97369f76ac80c364344894f59" exitCode=0 Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.076983 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6d6969d8-6mpvr" event={"ID":"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c","Type":"ContainerDied","Data":"e18c920c31b082cbd365a6ee627abb84120e29d97369f76ac80c364344894f59"} Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.084202 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5545f57d98-4rk8s" event={"ID":"5e5d1da3-0663-4a0d-984b-5e71e31a2e88","Type":"ContainerDied","Data":"67b68859f33d50442674dda07c26d271187b8398123118af889ef4cc966a6df9"} Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.084251 4975 scope.go:117] "RemoveContainer" containerID="87cb7fef6a5827783005c9df13ac3de0b70787e88659a55fd37ce5c9da9dc9dc" Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.084405 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5545f57d98-4rk8s" Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.094040 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.094023821 podStartE2EDuration="3.094023821s" podCreationTimestamp="2026-01-22 10:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:57:08.093631881 +0000 UTC m=+1220.751667991" watchObservedRunningTime="2026-01-22 10:57:08.094023821 +0000 UTC m=+1220.752059931" Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.133529 4975 scope.go:117] "RemoveContainer" containerID="4fe04ab208bfce1faac1d7526336eb52ca2e1abcb9d7392242a58d72bafbabbe" Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.144377 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5545f57d98-4rk8s"] Jan 22 10:57:08 crc kubenswrapper[4975]: I0122 10:57:08.150315 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5545f57d98-4rk8s"] Jan 22 10:57:09 crc kubenswrapper[4975]: I0122 10:57:09.658218 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 10:57:09 crc kubenswrapper[4975]: I0122 10:57:09.706811 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:57:09 crc kubenswrapper[4975]: I0122 10:57:09.767303 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" path="/var/lib/kubelet/pods/5e5d1da3-0663-4a0d-984b-5e71e31a2e88/volumes" Jan 22 10:57:09 crc kubenswrapper[4975]: I0122 10:57:09.800612 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" Jan 22 10:57:09 crc kubenswrapper[4975]: I0122 10:57:09.882310 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hb7cq"] Jan 22 10:57:09 crc kubenswrapper[4975]: I0122 10:57:09.888773 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" podUID="a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" containerName="dnsmasq-dns" containerID="cri-o://0c5faab4484ca63244181b21b6ed28fba0fbd5c471e2edcfedef5ea303abc629" gracePeriod=10 Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.077745 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6bb9bdd9dd-tqvz2_a01d16cc-a78f-4c0c-9893-6739b85eb257/neutron-api/0.log" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.077820 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.350502 4975 generic.go:334] "Generic (PLEG): container finished" podID="a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" containerID="0c5faab4484ca63244181b21b6ed28fba0fbd5c471e2edcfedef5ea303abc629" exitCode=0 Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.350607 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" event={"ID":"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b","Type":"ContainerDied","Data":"0c5faab4484ca63244181b21b6ed28fba0fbd5c471e2edcfedef5ea303abc629"} Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.354980 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6bb9bdd9dd-tqvz2_a01d16cc-a78f-4c0c-9893-6739b85eb257/neutron-api/0.log" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.355024 4975 generic.go:334] "Generic (PLEG): container finished" podID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerID="debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f" exitCode=137 Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.355093 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bb9bdd9dd-tqvz2" event={"ID":"a01d16cc-a78f-4c0c-9893-6739b85eb257","Type":"ContainerDied","Data":"debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f"} Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.355163 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bb9bdd9dd-tqvz2" event={"ID":"a01d16cc-a78f-4c0c-9893-6739b85eb257","Type":"ContainerDied","Data":"7701914175bc730658af2601bd625f923cd48d249b39b7a379b2f153dae39600"} Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.355188 4975 scope.go:117] "RemoveContainer" containerID="b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.355113 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bb9bdd9dd-tqvz2" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.355353 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerName="probe" containerID="cri-o://32a69262dffb789e16ae2fee8dec0f65f794c1d355ca7310f167196d750699b7" gracePeriod=30 Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.355208 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerName="cinder-scheduler" containerID="cri-o://8cfe09292b3fd1e15ecfa911978c435900f06cb261e56178529502cd65a47887" gracePeriod=30 Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.389610 4975 scope.go:117] "RemoveContainer" containerID="debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.416702 4975 scope.go:117] "RemoveContainer" containerID="b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347" Jan 22 10:57:10 crc kubenswrapper[4975]: E0122 10:57:10.417167 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347\": container with ID starting with b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347 not found: ID does not exist" containerID="b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.417197 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347"} err="failed to get container status \"b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347\": rpc error: code = NotFound desc = could not find container \"b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347\": container with ID starting with b83ceda00d1ad9eea4d6b73f14d01a9f625a1ba3484c9957e5eb79bfd96a8347 not found: ID does not exist" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.417219 4975 scope.go:117] "RemoveContainer" containerID="debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f" Jan 22 10:57:10 crc kubenswrapper[4975]: E0122 10:57:10.417567 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f\": container with ID starting with debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f not found: ID does not exist" containerID="debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.417621 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f"} err="failed to get container status \"debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f\": rpc error: code = NotFound desc = could not find container \"debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f\": container with ID starting with debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f not found: ID does not exist" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.440563 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-combined-ca-bundle\") pod \"a01d16cc-a78f-4c0c-9893-6739b85eb257\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.440734 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-httpd-config\") pod \"a01d16cc-a78f-4c0c-9893-6739b85eb257\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.440767 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl2k2\" (UniqueName: \"kubernetes.io/projected/a01d16cc-a78f-4c0c-9893-6739b85eb257-kube-api-access-jl2k2\") pod \"a01d16cc-a78f-4c0c-9893-6739b85eb257\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.440788 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-config\") pod \"a01d16cc-a78f-4c0c-9893-6739b85eb257\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.440804 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-ovndb-tls-certs\") pod \"a01d16cc-a78f-4c0c-9893-6739b85eb257\" (UID: \"a01d16cc-a78f-4c0c-9893-6739b85eb257\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.454872 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01d16cc-a78f-4c0c-9893-6739b85eb257-kube-api-access-jl2k2" (OuterVolumeSpecName: "kube-api-access-jl2k2") pod "a01d16cc-a78f-4c0c-9893-6739b85eb257" (UID: "a01d16cc-a78f-4c0c-9893-6739b85eb257"). InnerVolumeSpecName "kube-api-access-jl2k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.488693 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a01d16cc-a78f-4c0c-9893-6739b85eb257" (UID: "a01d16cc-a78f-4c0c-9893-6739b85eb257"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.538933 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-config" (OuterVolumeSpecName: "config") pod "a01d16cc-a78f-4c0c-9893-6739b85eb257" (UID: "a01d16cc-a78f-4c0c-9893-6739b85eb257"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.543301 4975 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.543354 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl2k2\" (UniqueName: \"kubernetes.io/projected/a01d16cc-a78f-4c0c-9893-6739b85eb257-kube-api-access-jl2k2\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.543366 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.548447 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a01d16cc-a78f-4c0c-9893-6739b85eb257" (UID: "a01d16cc-a78f-4c0c-9893-6739b85eb257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.591494 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a01d16cc-a78f-4c0c-9893-6739b85eb257" (UID: "a01d16cc-a78f-4c0c-9893-6739b85eb257"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.645252 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.645289 4975 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a01d16cc-a78f-4c0c-9893-6739b85eb257-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.752792 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bb9bdd9dd-tqvz2"] Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.753480 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.771176 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6bb9bdd9dd-tqvz2"] Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.952718 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l72nm\" (UniqueName: \"kubernetes.io/projected/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-kube-api-access-l72nm\") pod \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.952786 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-svc\") pod \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.952912 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-sb\") pod \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.953106 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-swift-storage-0\") pod \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.953163 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-config\") pod \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.953256 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-nb\") pod \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\" (UID: \"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b\") " Jan 22 10:57:10 crc kubenswrapper[4975]: I0122 10:57:10.962028 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-kube-api-access-l72nm" (OuterVolumeSpecName: "kube-api-access-l72nm") pod "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" (UID: "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b"). InnerVolumeSpecName "kube-api-access-l72nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.009043 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-config" (OuterVolumeSpecName: "config") pod "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" (UID: "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.023300 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" (UID: "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.023400 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" (UID: "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.027996 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" (UID: "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.037765 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" (UID: "a6b17edc-f1f3-4b73-b626-4cd1d7d4183b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.056359 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.056394 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.056406 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.056415 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l72nm\" (UniqueName: \"kubernetes.io/projected/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-kube-api-access-l72nm\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.056424 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.056434 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.367014 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.367004 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hb7cq" event={"ID":"a6b17edc-f1f3-4b73-b626-4cd1d7d4183b","Type":"ContainerDied","Data":"10160ce8df405b1cc4e80cca212a7f0369413455a34215d9e1f589527e32eb53"} Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.367160 4975 scope.go:117] "RemoveContainer" containerID="0c5faab4484ca63244181b21b6ed28fba0fbd5c471e2edcfedef5ea303abc629" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.373389 4975 generic.go:334] "Generic (PLEG): container finished" podID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerID="32a69262dffb789e16ae2fee8dec0f65f794c1d355ca7310f167196d750699b7" exitCode=0 Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.373428 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a","Type":"ContainerDied","Data":"32a69262dffb789e16ae2fee8dec0f65f794c1d355ca7310f167196d750699b7"} Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.390277 4975 scope.go:117] "RemoveContainer" containerID="aefa34b03211c89e7b3d5ace2b7761699c02eab2895683367df28e26c9fe9810" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.412045 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hb7cq"] Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.436129 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hb7cq"] Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.763693 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" path="/var/lib/kubelet/pods/a01d16cc-a78f-4c0c-9893-6739b85eb257/volumes" Jan 22 10:57:11 crc kubenswrapper[4975]: I0122 10:57:11.764513 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" path="/var/lib/kubelet/pods/a6b17edc-f1f3-4b73-b626-4cd1d7d4183b/volumes" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.286767 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 10:57:12 crc kubenswrapper[4975]: E0122 10:57:12.287631 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerName="neutron-api" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.287767 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerName="neutron-api" Jan 22 10:57:12 crc kubenswrapper[4975]: E0122 10:57:12.287858 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api-log" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.287938 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api-log" Jan 22 10:57:12 crc kubenswrapper[4975]: E0122 10:57:12.288017 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" containerName="init" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.288084 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" containerName="init" Jan 22 10:57:12 crc kubenswrapper[4975]: E0122 10:57:12.288170 4975 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" containerName="dnsmasq-dns" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.288246 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" containerName="dnsmasq-dns" Jan 22 10:57:12 crc kubenswrapper[4975]: E0122 10:57:12.288352 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerName="neutron-httpd" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.288430 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerName="neutron-httpd" Jan 22 10:57:12 crc kubenswrapper[4975]: E0122 10:57:12.288505 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.288572 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.288865 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.288947 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerName="neutron-httpd" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.289036 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01d16cc-a78f-4c0c-9893-6739b85eb257" containerName="neutron-api" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.289115 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e5d1da3-0663-4a0d-984b-5e71e31a2e88" containerName="barbican-api-log" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.289188 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b17edc-f1f3-4b73-b626-4cd1d7d4183b" containerName="dnsmasq-dns" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.290078 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.292209 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.292463 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-r5bs2" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.292520 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.296783 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.486431 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.486538 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.486572 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/996927d1-5796-4ab6-8173-2507683917a0-openstack-config\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.486589 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plfvp\" (UniqueName: \"kubernetes.io/projected/996927d1-5796-4ab6-8173-2507683917a0-kube-api-access-plfvp\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.588174 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.588312 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.588394 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/996927d1-5796-4ab6-8173-2507683917a0-openstack-config\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.588424 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-plfvp\" (UniqueName: \"kubernetes.io/projected/996927d1-5796-4ab6-8173-2507683917a0-kube-api-access-plfvp\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.589164 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/996927d1-5796-4ab6-8173-2507683917a0-openstack-config\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.605920 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.607881 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plfvp\" (UniqueName: \"kubernetes.io/projected/996927d1-5796-4ab6-8173-2507683917a0-kube-api-access-plfvp\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.607996 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.609437 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.610057 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.644463 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.669405 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.670677 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.689975 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 10:57:12 crc kubenswrapper[4975]: E0122 10:57:12.724183 4975 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 22 10:57:12 crc kubenswrapper[4975]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_996927d1-5796-4ab6-8173-2507683917a0_0(779b6c497efe422d4a5958b836703beacace0c233c64ade328c75cb64affa380): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"779b6c497efe422d4a5958b836703beacace0c233c64ade328c75cb64affa380" Netns:"/var/run/netns/c56f1f54-c42f-47fb-aa52-b2fab41d3f89" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=779b6c497efe422d4a5958b836703beacace0c233c64ade328c75cb64affa380;K8S_POD_UID=996927d1-5796-4ab6-8173-2507683917a0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/996927d1-5796-4ab6-8173-2507683917a0]: expected pod UID "996927d1-5796-4ab6-8173-2507683917a0" but got "1217b84a-f2e0-49f9-a237-30f9493f1ca4" from Kube API Jan 22 10:57:12 crc kubenswrapper[4975]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 22 10:57:12 crc kubenswrapper[4975]: > Jan 22 10:57:12 crc kubenswrapper[4975]: E0122 10:57:12.724252 4975 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 22 10:57:12 crc kubenswrapper[4975]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_996927d1-5796-4ab6-8173-2507683917a0_0(779b6c497efe422d4a5958b836703beacace0c233c64ade328c75cb64affa380): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"779b6c497efe422d4a5958b836703beacace0c233c64ade328c75cb64affa380" Netns:"/var/run/netns/c56f1f54-c42f-47fb-aa52-b2fab41d3f89" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=779b6c497efe422d4a5958b836703beacace0c233c64ade328c75cb64affa380;K8S_POD_UID=996927d1-5796-4ab6-8173-2507683917a0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/996927d1-5796-4ab6-8173-2507683917a0]: expected pod UID "996927d1-5796-4ab6-8173-2507683917a0" but got "1217b84a-f2e0-49f9-a237-30f9493f1ca4" from Kube API Jan 22 10:57:12 crc kubenswrapper[4975]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 22 10:57:12 
Jan 22 10:57:12 crc kubenswrapper[4975]: > pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.792582 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1217b84a-f2e0-49f9-a237-30f9493f1ca4-openstack-config-secret\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.792750 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1217b84a-f2e0-49f9-a237-30f9493f1ca4-openstack-config\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.792782 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1217b84a-f2e0-49f9-a237-30f9493f1ca4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.792839 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99hz\" (UniqueName: \"kubernetes.io/projected/1217b84a-f2e0-49f9-a237-30f9493f1ca4-kube-api-access-d99hz\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.879029 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-78ffc5fbc9-46msq"]
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.880568 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-78ffc5fbc9-46msq"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.882868 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.882878 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.883706 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.894590 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1217b84a-f2e0-49f9-a237-30f9493f1ca4-openstack-config\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.894650 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1217b84a-f2e0-49f9-a237-30f9493f1ca4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.894714 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99hz\" (UniqueName: \"kubernetes.io/projected/1217b84a-f2e0-49f9-a237-30f9493f1ca4-kube-api-access-d99hz\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.894750 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1217b84a-f2e0-49f9-a237-30f9493f1ca4-openstack-config-secret\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.895590 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1217b84a-f2e0-49f9-a237-30f9493f1ca4-openstack-config\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.899732 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78ffc5fbc9-46msq"]
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.907538 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1217b84a-f2e0-49f9-a237-30f9493f1ca4-openstack-config-secret\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.916525 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99hz\" (UniqueName: \"kubernetes.io/projected/1217b84a-f2e0-49f9-a237-30f9493f1ca4-kube-api-access-d99hz\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.920556 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1217b84a-f2e0-49f9-a237-30f9493f1ca4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1217b84a-f2e0-49f9-a237-30f9493f1ca4\") " pod="openstack/openstackclient"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.996981 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-config-data\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.997058 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-combined-ca-bundle\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.997079 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f6b2db-2af6-4817-a5cc-9c04fe10070f-run-httpd\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.997115 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91f6b2db-2af6-4817-a5cc-9c04fe10070f-etc-swift\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.997132 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-internal-tls-certs\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.997155 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f6b2db-2af6-4817-a5cc-9c04fe10070f-log-httpd\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.997197 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-public-tls-certs\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq"
Jan 22 10:57:12 crc kubenswrapper[4975]: I0122 10:57:12.997279 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whlk\" (UniqueName: \"kubernetes.io/projected/91f6b2db-2af6-4817-a5cc-9c04fe10070f-kube-api-access-5whlk\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq"
Need to start a new one" pod="openstack/openstackclient" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.098858 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5whlk\" (UniqueName: \"kubernetes.io/projected/91f6b2db-2af6-4817-a5cc-9c04fe10070f-kube-api-access-5whlk\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.099010 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-config-data\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.099065 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-combined-ca-bundle\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.099096 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f6b2db-2af6-4817-a5cc-9c04fe10070f-run-httpd\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.099125 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91f6b2db-2af6-4817-a5cc-9c04fe10070f-etc-swift\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.099146 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-internal-tls-certs\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.099178 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f6b2db-2af6-4817-a5cc-9c04fe10070f-log-httpd\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.099209 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-public-tls-certs\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.100183 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f6b2db-2af6-4817-a5cc-9c04fe10070f-run-httpd\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc 
kubenswrapper[4975]: I0122 10:57:13.100412 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91f6b2db-2af6-4817-a5cc-9c04fe10070f-log-httpd\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.103553 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-internal-tls-certs\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.104048 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-config-data\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.104254 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-combined-ca-bundle\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.104836 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f6b2db-2af6-4817-a5cc-9c04fe10070f-public-tls-certs\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.115012 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91f6b2db-2af6-4817-a5cc-9c04fe10070f-etc-swift\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.120820 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5whlk\" (UniqueName: \"kubernetes.io/projected/91f6b2db-2af6-4817-a5cc-9c04fe10070f-kube-api-access-5whlk\") pod \"swift-proxy-78ffc5fbc9-46msq\" (UID: \"91f6b2db-2af6-4817-a5cc-9c04fe10070f\") " pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.205959 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.394661 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.400085 4975 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="996927d1-5796-4ab6-8173-2507683917a0" podUID="1217b84a-f2e0-49f9-a237-30f9493f1ca4" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.415598 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.506425 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-openstack-config-secret\") pod \"996927d1-5796-4ab6-8173-2507683917a0\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.506479 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plfvp\" (UniqueName: \"kubernetes.io/projected/996927d1-5796-4ab6-8173-2507683917a0-kube-api-access-plfvp\") pod \"996927d1-5796-4ab6-8173-2507683917a0\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.506562 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/996927d1-5796-4ab6-8173-2507683917a0-openstack-config\") pod \"996927d1-5796-4ab6-8173-2507683917a0\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.506611 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-combined-ca-bundle\") pod \"996927d1-5796-4ab6-8173-2507683917a0\" (UID: \"996927d1-5796-4ab6-8173-2507683917a0\") " Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.507935 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996927d1-5796-4ab6-8173-2507683917a0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "996927d1-5796-4ab6-8173-2507683917a0" (UID: "996927d1-5796-4ab6-8173-2507683917a0"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.511591 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "996927d1-5796-4ab6-8173-2507683917a0" (UID: "996927d1-5796-4ab6-8173-2507683917a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.511786 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "996927d1-5796-4ab6-8173-2507683917a0" (UID: "996927d1-5796-4ab6-8173-2507683917a0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.523536 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996927d1-5796-4ab6-8173-2507683917a0-kube-api-access-plfvp" (OuterVolumeSpecName: "kube-api-access-plfvp") pod "996927d1-5796-4ab6-8173-2507683917a0" (UID: "996927d1-5796-4ab6-8173-2507683917a0"). InnerVolumeSpecName "kube-api-access-plfvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.530993 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.597507 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78ffc5fbc9-46msq"] Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.608687 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.608713 4975 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/996927d1-5796-4ab6-8173-2507683917a0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.608724 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plfvp\" (UniqueName: \"kubernetes.io/projected/996927d1-5796-4ab6-8173-2507683917a0-kube-api-access-plfvp\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.608733 4975 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/996927d1-5796-4ab6-8173-2507683917a0-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:13 crc kubenswrapper[4975]: I0122 10:57:13.764036 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996927d1-5796-4ab6-8173-2507683917a0" path="/var/lib/kubelet/pods/996927d1-5796-4ab6-8173-2507683917a0/volumes" Jan 22 10:57:14 crc kubenswrapper[4975]: I0122 10:57:14.402587 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1217b84a-f2e0-49f9-a237-30f9493f1ca4","Type":"ContainerStarted","Data":"f818548ce10e9bafe27328d080e321700708a64a6a9f2fe1e6e46088091e8707"} Jan 22 10:57:14 crc kubenswrapper[4975]: I0122 10:57:14.403879 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78ffc5fbc9-46msq" event={"ID":"91f6b2db-2af6-4817-a5cc-9c04fe10070f","Type":"ContainerStarted","Data":"bd3c63151e6a8612e21c30bb1d8a41634e99e28bef5075c778e189d58681b988"} Jan 22 10:57:14 crc kubenswrapper[4975]: I0122 10:57:14.405961 4975 generic.go:334] "Generic (PLEG): container finished" podID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerID="8cfe09292b3fd1e15ecfa911978c435900f06cb261e56178529502cd65a47887" exitCode=0 Jan 22 10:57:14 crc kubenswrapper[4975]: I0122 10:57:14.406037 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a","Type":"ContainerDied","Data":"8cfe09292b3fd1e15ecfa911978c435900f06cb261e56178529502cd65a47887"} Jan 22 10:57:14 crc kubenswrapper[4975]: I0122 10:57:14.406047 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 22 10:57:14 crc kubenswrapper[4975]: I0122 10:57:14.413545 4975 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="996927d1-5796-4ab6-8173-2507683917a0" podUID="1217b84a-f2e0-49f9-a237-30f9493f1ca4" Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.072609 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.077298 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="ceilometer-central-agent" containerID="cri-o://382e6447c53a2d9603ed67056271c7ef5de19fb331dfdff2fe14bd08ebfa8227" gracePeriod=30 Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.078886 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="sg-core" containerID="cri-o://24ce52effd2845187413e12425548153724069a77b1f2b64faa5c46c4dcfc156" gracePeriod=30 Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.079024 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="proxy-httpd" containerID="cri-o://1e78aa99b48f9a01e9fb62997007a695ced9c744c3b2704f07774e1fcd5ddf1a" gracePeriod=30 Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.079097 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="ceilometer-notification-agent" containerID="cri-o://a95cac686e5dac6786e775d42ad6fbc775ed63671230bacfec1bbcbb3e353df0" gracePeriod=30 Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.110018 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.164:3000/\": EOF" Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.429443 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78ffc5fbc9-46msq" event={"ID":"91f6b2db-2af6-4817-a5cc-9c04fe10070f","Type":"ContainerStarted","Data":"84615825d24c39f8827f2e41387168c42eaac00a3aaf3d0a4719a0174558c093"} Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.432123 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.432136 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78ffc5fbc9-46msq" event={"ID":"91f6b2db-2af6-4817-a5cc-9c04fe10070f","Type":"ContainerStarted","Data":"904584faff97c6326d75783a5e970109c6c67f3b5b3c0b2ddcd44dcaf4b08ff7"} Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.432147 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.437318 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.437713 4975 generic.go:334] "Generic (PLEG): container finished" podID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerID="1e78aa99b48f9a01e9fb62997007a695ced9c744c3b2704f07774e1fcd5ddf1a" exitCode=0 Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.437729 4975 generic.go:334] "Generic (PLEG): container finished" podID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerID="24ce52effd2845187413e12425548153724069a77b1f2b64faa5c46c4dcfc156" exitCode=2 Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.437745 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerDied","Data":"1e78aa99b48f9a01e9fb62997007a695ced9c744c3b2704f07774e1fcd5ddf1a"} Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.437764 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerDied","Data":"24ce52effd2845187413e12425548153724069a77b1f2b64faa5c46c4dcfc156"} Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.458088 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-78ffc5fbc9-46msq" podStartSLOduration=3.45807337 podStartE2EDuration="3.45807337s" podCreationTimestamp="2026-01-22 10:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:57:15.455754163 +0000 UTC m=+1228.113790273" watchObservedRunningTime="2026-01-22 10:57:15.45807337 +0000 UTC m=+1228.116109480" Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.559063 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data-custom\") pod \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.559374 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-etc-machine-id\") pod \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.559396 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-scripts\") pod \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.559428 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcbgz\" (UniqueName: \"kubernetes.io/projected/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-kube-api-access-rcbgz\") pod \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.559504 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-combined-ca-bundle\") pod \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") " Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.559523 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data\") pod \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\" (UID: \"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a\") "
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.559923 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" (UID: "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.564744 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-kube-api-access-rcbgz" (OuterVolumeSpecName: "kube-api-access-rcbgz") pod "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" (UID: "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a"). InnerVolumeSpecName "kube-api-access-rcbgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.568406 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" (UID: "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.570452 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-scripts" (OuterVolumeSpecName: "scripts") pod "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" (UID: "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.618429 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" (UID: "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.663004 4975 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.663210 4975 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.663280 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.663380 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcbgz\" (UniqueName: \"kubernetes.io/projected/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-kube-api-access-rcbgz\") on node \"crc\" DevicePath \"\""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.663467 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.686412 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data" (OuterVolumeSpecName: "config-data") pod "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" (UID: "ee8474ad-59d4-44d3-b7f1-e81925e6ef6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:57:15 crc kubenswrapper[4975]: I0122 10:57:15.764952 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.448342 4975 generic.go:334] "Generic (PLEG): container finished" podID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerID="382e6447c53a2d9603ed67056271c7ef5de19fb331dfdff2fe14bd08ebfa8227" exitCode=0
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.448359 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerDied","Data":"382e6447c53a2d9603ed67056271c7ef5de19fb331dfdff2fe14bd08ebfa8227"}
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.451155 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee8474ad-59d4-44d3-b7f1-e81925e6ef6a","Type":"ContainerDied","Data":"cbc8c181ad6e765a3984297041db98faf29f27746f001b40dc30d3b49b666c2a"}
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.451195 4975 scope.go:117] "RemoveContainer" containerID="32a69262dffb789e16ae2fee8dec0f65f794c1d355ca7310f167196d750699b7"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.451170 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.483507 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.489894 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.494349 4975 scope.go:117] "RemoveContainer" containerID="8cfe09292b3fd1e15ecfa911978c435900f06cb261e56178529502cd65a47887"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.501481 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 22 10:57:16 crc kubenswrapper[4975]: E0122 10:57:16.501952 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerName="cinder-scheduler"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.501976 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerName="cinder-scheduler"
Jan 22 10:57:16 crc kubenswrapper[4975]: E0122 10:57:16.502010 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerName="probe"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.502019 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerName="probe"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.502206 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerName="cinder-scheduler"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.502230 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" containerName="probe"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.503441 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.506302 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.512195 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.653858 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5bbdd45bdd-ckdgl" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.680574 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.680617 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvw6v\" (UniqueName: \"kubernetes.io/projected/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-kube-api-access-fvw6v\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.680820 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.681076 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.681162 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.681220 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.782556 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0"
Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.782642 4975
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.782680 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.782716 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.782741 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.782758 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvw6v\" (UniqueName: \"kubernetes.io/projected/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-kube-api-access-fvw6v\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.782757 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.788320 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.789768 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.791171 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.794996 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " 
pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.807514 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvw6v\" (UniqueName: \"kubernetes.io/projected/0be551d1-c8bf-440d-b0af-1b40cb1d7e8a-kube-api-access-fvw6v\") pod \"cinder-scheduler-0\" (UID: \"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a\") " pod="openstack/cinder-scheduler-0" Jan 22 10:57:16 crc kubenswrapper[4975]: I0122 10:57:16.830752 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.433212 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.492842 4975 generic.go:334] "Generic (PLEG): container finished" podID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerID="a95cac686e5dac6786e775d42ad6fbc775ed63671230bacfec1bbcbb3e353df0" exitCode=0 Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.492898 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerDied","Data":"a95cac686e5dac6786e775d42ad6fbc775ed63671230bacfec1bbcbb3e353df0"} Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.515658 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a","Type":"ContainerStarted","Data":"9767138b7a32c5516aed9d18beb27381a043150dc36ee4c692cadd5467068298"} Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.646890 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.770167 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8474ad-59d4-44d3-b7f1-e81925e6ef6a" path="/var/lib/kubelet/pods/ee8474ad-59d4-44d3-b7f1-e81925e6ef6a/volumes" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.804405 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-config-data\") pod \"fb8f805f-609a-4a60-a09a-b4da8360c470\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.804527 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfvx6\" (UniqueName: \"kubernetes.io/projected/fb8f805f-609a-4a60-a09a-b4da8360c470-kube-api-access-zfvx6\") pod \"fb8f805f-609a-4a60-a09a-b4da8360c470\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.804575 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-sg-core-conf-yaml\") pod \"fb8f805f-609a-4a60-a09a-b4da8360c470\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.804635 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-run-httpd\") pod \"fb8f805f-609a-4a60-a09a-b4da8360c470\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.804721 4975 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-combined-ca-bundle\") pod \"fb8f805f-609a-4a60-a09a-b4da8360c470\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.804789 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-log-httpd\") pod \"fb8f805f-609a-4a60-a09a-b4da8360c470\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.804850 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-scripts\") pod \"fb8f805f-609a-4a60-a09a-b4da8360c470\" (UID: \"fb8f805f-609a-4a60-a09a-b4da8360c470\") " Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.805767 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fb8f805f-609a-4a60-a09a-b4da8360c470" (UID: "fb8f805f-609a-4a60-a09a-b4da8360c470"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.806929 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fb8f805f-609a-4a60-a09a-b4da8360c470" (UID: "fb8f805f-609a-4a60-a09a-b4da8360c470"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.813037 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-scripts" (OuterVolumeSpecName: "scripts") pod "fb8f805f-609a-4a60-a09a-b4da8360c470" (UID: "fb8f805f-609a-4a60-a09a-b4da8360c470"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.820559 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8f805f-609a-4a60-a09a-b4da8360c470-kube-api-access-zfvx6" (OuterVolumeSpecName: "kube-api-access-zfvx6") pod "fb8f805f-609a-4a60-a09a-b4da8360c470" (UID: "fb8f805f-609a-4a60-a09a-b4da8360c470"). InnerVolumeSpecName "kube-api-access-zfvx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.837526 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fb8f805f-609a-4a60-a09a-b4da8360c470" (UID: "fb8f805f-609a-4a60-a09a-b4da8360c470"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.881022 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb8f805f-609a-4a60-a09a-b4da8360c470" (UID: "fb8f805f-609a-4a60-a09a-b4da8360c470"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.884015 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.914942 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfvx6\" (UniqueName: \"kubernetes.io/projected/fb8f805f-609a-4a60-a09a-b4da8360c470-kube-api-access-zfvx6\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.921380 4975 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.921439 4975 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.921467 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.921479 4975 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb8f805f-609a-4a60-a09a-b4da8360c470-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.921508 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:17 crc kubenswrapper[4975]: I0122 10:57:17.974102 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-config-data" (OuterVolumeSpecName: "config-data") pod "fb8f805f-609a-4a60-a09a-b4da8360c470" (UID: "fb8f805f-609a-4a60-a09a-b4da8360c470"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.023818 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8f805f-609a-4a60-a09a-b4da8360c470-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.541764 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a","Type":"ContainerStarted","Data":"2b2a0d3a2211adf10094652e20da3ea599208b8c74a4a88abc1cf0d77bd3d81f"} Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.554084 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb8f805f-609a-4a60-a09a-b4da8360c470","Type":"ContainerDied","Data":"977e454cfce5d639d79c511be053129cad12c5451c2d7def3d7821bf68c1b393"} Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.554425 4975 scope.go:117] "RemoveContainer" containerID="1e78aa99b48f9a01e9fb62997007a695ced9c744c3b2704f07774e1fcd5ddf1a" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.554518 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.595014 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.598939 4975 scope.go:117] "RemoveContainer" containerID="24ce52effd2845187413e12425548153724069a77b1f2b64faa5c46c4dcfc156" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.603602 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.624130 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:18 crc kubenswrapper[4975]: E0122 10:57:18.624474 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="ceilometer-notification-agent" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.624489 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="ceilometer-notification-agent" Jan 22 10:57:18 crc kubenswrapper[4975]: E0122 10:57:18.624509 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="sg-core" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.624515 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="sg-core" Jan 22 10:57:18 crc kubenswrapper[4975]: E0122 10:57:18.624525 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="ceilometer-central-agent" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.624534 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="ceilometer-central-agent" Jan 22 10:57:18 crc kubenswrapper[4975]: E0122 10:57:18.624547 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="proxy-httpd" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.624700 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="proxy-httpd" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.624860 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="sg-core" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.624879 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="ceilometer-notification-agent" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.624885 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="proxy-httpd" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.624897 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" containerName="ceilometer-central-agent" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.628456 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.631195 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.631419 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.645276 4975 scope.go:117] "RemoveContainer" containerID="a95cac686e5dac6786e775d42ad6fbc775ed63671230bacfec1bbcbb3e353df0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.646881 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.699966 4975 scope.go:117] "RemoveContainer" containerID="382e6447c53a2d9603ed67056271c7ef5de19fb331dfdff2fe14bd08ebfa8227" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.741771 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-log-httpd\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.741989 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.742036 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4km5k\" (UniqueName: \"kubernetes.io/projected/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-kube-api-access-4km5k\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.742130 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-config-data\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.742184 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-scripts\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.742308 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.742520 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-run-httpd\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 
10:57:18.844453 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-log-httpd\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.844553 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.844582 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4km5k\" (UniqueName: \"kubernetes.io/projected/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-kube-api-access-4km5k\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.844612 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-config-data\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.844627 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-scripts\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.844642 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.844694 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-run-httpd\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.846473 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-log-httpd\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.846661 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-run-httpd\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.854471 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-scripts\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.856290 4975 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.857715 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.865530 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4km5k\" (UniqueName: \"kubernetes.io/projected/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-kube-api-access-4km5k\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.865838 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-config-data\") pod \"ceilometer-0\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " pod="openstack/ceilometer-0" Jan 22 10:57:18 crc kubenswrapper[4975]: I0122 10:57:18.946822 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:19 crc kubenswrapper[4975]: I0122 10:57:19.427719 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:19 crc kubenswrapper[4975]: I0122 10:57:19.564579 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0be551d1-c8bf-440d-b0af-1b40cb1d7e8a","Type":"ContainerStarted","Data":"dec10d5311e19e058bc3ce83f14641a344ce275aa6ea4d3a1e539583866a6ef5"} Jan 22 10:57:19 crc kubenswrapper[4975]: I0122 10:57:19.568973 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerStarted","Data":"caa6e18c0d30dd5d5d8756a31071ef4234b0188c956d9e694187f9de0c94bf05"} Jan 22 10:57:19 crc kubenswrapper[4975]: I0122 10:57:19.586713 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.586690918 podStartE2EDuration="3.586690918s" podCreationTimestamp="2026-01-22 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:57:19.582530278 +0000 UTC m=+1232.240566388" watchObservedRunningTime="2026-01-22 10:57:19.586690918 +0000 UTC m=+1232.244727028" Jan 22 10:57:19 crc kubenswrapper[4975]: I0122 10:57:19.770502 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8f805f-609a-4a60-a09a-b4da8360c470" path="/var/lib/kubelet/pods/fb8f805f-609a-4a60-a09a-b4da8360c470/volumes" Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.510649 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice/crio-ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40.scope WatchSource:0}: Error finding container ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40: Status 404 returned error can't find the container with id 
ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40 Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.515484 4975 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b17edc_f1f3_4b73_b626_4cd1d7d4183b.slice/crio-conmon-0c5faab4484ca63244181b21b6ed28fba0fbd5c471e2edcfedef5ea303abc629.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b17edc_f1f3_4b73_b626_4cd1d7d4183b.slice/crio-conmon-0c5faab4484ca63244181b21b6ed28fba0fbd5c471e2edcfedef5ea303abc629.scope: no such file or directory Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.515544 4975 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e5d1da3_0663_4a0d_984b_5e71e31a2e88.slice/crio-conmon-87cb7fef6a5827783005c9df13ac3de0b70787e88659a55fd37ce5c9da9dc9dc.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e5d1da3_0663_4a0d_984b_5e71e31a2e88.slice/crio-conmon-87cb7fef6a5827783005c9df13ac3de0b70787e88659a55fd37ce5c9da9dc9dc.scope: no such file or directory Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.515565 4975 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e5d1da3_0663_4a0d_984b_5e71e31a2e88.slice/crio-87cb7fef6a5827783005c9df13ac3de0b70787e88659a55fd37ce5c9da9dc9dc.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e5d1da3_0663_4a0d_984b_5e71e31a2e88.slice/crio-87cb7fef6a5827783005c9df13ac3de0b70787e88659a55fd37ce5c9da9dc9dc.scope: no such file or directory Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.515583 4975 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b17edc_f1f3_4b73_b626_4cd1d7d4183b.slice/crio-0c5faab4484ca63244181b21b6ed28fba0fbd5c471e2edcfedef5ea303abc629.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b17edc_f1f3_4b73_b626_4cd1d7d4183b.slice/crio-0c5faab4484ca63244181b21b6ed28fba0fbd5c471e2edcfedef5ea303abc629.scope: no such file or directory Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.543972 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b17edc_f1f3_4b73_b626_4cd1d7d4183b.slice/crio-aefa34b03211c89e7b3d5ace2b7761699c02eab2895683367df28e26c9fe9810.scope WatchSource:0}: Error finding container aefa34b03211c89e7b3d5ace2b7761699c02eab2895683367df28e26c9fe9810: Status 404 returned error can't find the container with id aefa34b03211c89e7b3d5ace2b7761699c02eab2895683367df28e26c9fe9810 Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.544257 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e5d1da3_0663_4a0d_984b_5e71e31a2e88.slice/crio-4fe04ab208bfce1faac1d7526336eb52ca2e1abcb9d7392242a58d72bafbabbe.scope WatchSource:0}: Error finding container 4fe04ab208bfce1faac1d7526336eb52ca2e1abcb9d7392242a58d72bafbabbe: Status 404 returned error can't find the container with id 4fe04ab208bfce1faac1d7526336eb52ca2e1abcb9d7392242a58d72bafbabbe 
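The watcher.go errors above come from cAdvisor racing container teardown: by the time it tries to add an inotify watch on a crio-*.scope cgroup directory, CRI-O has already removed it, so the syscall fails with ENOENT. The mask printed in each message, 0x40000100, is exactly IN_CREATE|IN_ISDIR. A minimal Go sketch of the failing call follows; the .scope path is a made-up stand-in, not one of the real slices from this log:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.InotifyInit1(unix.IN_CLOEXEC)
	if err != nil {
		panic(err)
	}
	defer unix.Close(fd)

	// Stand-in for a cgroup scope directory that CRI-O has already deleted.
	const gone = "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/crio-deadbeef.scope"

	// 0x40000100 from the log decodes to IN_CREATE|IN_ISDIR.
	_, err = unix.InotifyAddWatch(fd, gone, unix.IN_CREATE|unix.IN_ISDIR)
	fmt.Println(err) // "no such file or directory" (ENOENT), as in watcher.go above
}

In this journal the errors coincide with the cinder-scheduler-0 and ceilometer-0 pod replacements, so the missing scopes are expected churn rather than a fault.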
Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.557593 4975 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb8f805f_609a_4a60_a09a_b4da8360c470.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb8f805f_609a_4a60_a09a_b4da8360c470.slice: no such file or directory Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.593461 4975 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee8474ad_59d4_44d3_b7f1_e81925e6ef6a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee8474ad_59d4_44d3_b7f1_e81925e6ef6a.slice: no such file or directory Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.606270 4975 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9303a4bb_59d1_4a47_afe3_0275a2334e9e.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9303a4bb_59d1_4a47_afe3_0275a2334e9e.slice: no such file or directory Jan 22 10:57:20 crc kubenswrapper[4975]: W0122 10:57:20.624648 4975 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod996927d1_5796_4ab6_8173_2507683917a0.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod996927d1_5796_4ab6_8173_2507683917a0.slice: no such file or directory Jan 22 10:57:20 crc kubenswrapper[4975]: I0122 10:57:20.652443 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerStarted","Data":"c18b802c3ac7ebe5ace13084c6c8e9a95ec33078daa0371abd36747966059f06"} Jan 22 10:57:20 crc kubenswrapper[4975]: I0122 10:57:20.684196 4975 generic.go:334] "Generic (PLEG): container finished" podID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerID="170be906508f324bc03f9f51c81d740af00020c9ee1ec02a921092d973426289" exitCode=0 Jan 22 10:57:20 crc kubenswrapper[4975]: I0122 10:57:20.685303 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6d6969d8-6mpvr" event={"ID":"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c","Type":"ContainerDied","Data":"170be906508f324bc03f9f51c81d740af00020c9ee1ec02a921092d973426289"} Jan 22 10:57:20 crc kubenswrapper[4975]: E0122 10:57:20.928119 4975 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice/crio-48d4baefe552b0defdaed3653129918a76b3853e23c8fc1a098d7ec2f3f70582\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice/crio-543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf58e68a6_dee2_49a4_be2d_e6b1ca69b19f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b17edc_f1f3_4b73_b626_4cd1d7d4183b.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc5d94b3_48d8_485a_b307_9a011d77ee76.slice/crio-e133221d6ac1001025ac6a494021a555d0ffd31b1f0f89a766296818d86b5169.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a762d84_4b11_4b4b_91e1_51e43be6ff86.slice/crio-conmon-2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e5d1da3_0663_4a0d_984b_5e71e31a2e88.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8e0f17_b39b_4c05_abde_d1d73dd7b20c.slice/crio-conmon-170be906508f324bc03f9f51c81d740af00020c9ee1ec02a921092d973426289.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice/crio-conmon-ad2fcbbcefbe697abb7e0649a97516a04a271b075eeecbed86b3c89d6cfb4b40.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice/crio-conmon-543109d62de75256de6b1c91fa7bb65cd4afe711b62344b29fdba190a8dbb336.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf58e68a6_dee2_49a4_be2d_e6b1ca69b19f.slice/crio-b9142cac76f6745609a714d1e015b5c611624dc75b88fc27b0bbb6b42d0ce9bd\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a762d84_4b11_4b4b_91e1_51e43be6ff86.slice/crio-c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc5d94b3_48d8_485a_b307_9a011d77ee76.slice/crio-conmon-e133221d6ac1001025ac6a494021a555d0ffd31b1f0f89a766296818d86b5169.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8e0f17_b39b_4c05_abde_d1d73dd7b20c.slice/crio-conmon-e18c920c31b082cbd365a6ee627abb84120e29d97369f76ac80c364344894f59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40dba24f_eb65_4337_b3e8_6498018ae0cd.slice/crio-conmon-15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a762d84_4b11_4b4b_91e1_51e43be6ff86.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40dba24f_eb65_4337_b3e8_6498018ae0cd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice/crio-31cc176940fe46b1432897b44bacbec1ed7c774a2cba5f1a87f033e055d285de.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40dba24f_eb65_4337_b3e8_6498018ae0cd.slice/crio-15e7c4b69e46d99bacb129c5b2a87d4137660a6c61cddaccbcb70405beee4075.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8e0f17_b39b_4c05_abde_d1d73dd7b20c.slice/crio-170be906508f324bc03f9f51c81d740af00020c9ee1ec02a921092d973426289.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda01d16cc_a78f_4c0c_9893_6739b85eb257.slice/crio-debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8e0f17_b39b_4c05_abde_d1d73dd7b20c.slice/crio-e18c920c31b082cbd365a6ee627abb84120e29d97369f76ac80c364344894f59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod090fc6ba_2ecd_4dbd_8e2c_86112df32aef.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b17edc_f1f3_4b73_b626_4cd1d7d4183b.slice/crio-10160ce8df405b1cc4e80cca212a7f0369413455a34215d9e1f589527e32eb53\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod090fc6ba_2ecd_4dbd_8e2c_86112df32aef.slice/crio-9e0ea78df671e36810356f60b5d6b014fa61dd78c9fbd1e26c9e3f554f2b4509\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a762d84_4b11_4b4b_91e1_51e43be6ff86.slice/crio-2b45d7dfadba34c596c00594f01ac3de654047b31c862bb2c7229f2714c2d0b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice/crio-dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a762d84_4b11_4b4b_91e1_51e43be6ff86.slice/crio-b56162b1c92adaaf0b0fdd13e949146b5b7ebfd3a9f5d67d0664a4a71eff7b5d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40dba24f_eb65_4337_b3e8_6498018ae0cd.slice/crio-03bf6c3c89deb9815bd23aea1ad006734d2417ca2f3ed74399836dec59563aa4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a762d84_4b11_4b4b_91e1_51e43be6ff86.slice/crio-conmon-c9583a34654a58d7f99b0e5bb92a5cb39c0394a934a5d57c1436f7ce36f74aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda01d16cc_a78f_4c0c_9893_6739b85eb257.slice/crio-conmon-debb61be51b27d98989279bca844c88c62f073c8db690eb4fff7a9270b83709f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420d2506_62c2_4217_bab2_0ea643c9d8cb.slice/crio-conmon-dced2c42008467b6f0308ac514e7a011680e025796c881283c39ec5cb0b4bd85.scope\": RecentStats: unable to find data in memory cache]" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.154035 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.300571 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-config\") pod \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.300879 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kstg8\" (UniqueName: \"kubernetes.io/projected/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-kube-api-access-kstg8\") pod \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.300939 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-combined-ca-bundle\") pod \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.300992 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-httpd-config\") pod \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.301019 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-ovndb-tls-certs\") pod \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\" (UID: \"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c\") " Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.307604 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-kube-api-access-kstg8" (OuterVolumeSpecName: "kube-api-access-kstg8") pod "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" (UID: "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c"). InnerVolumeSpecName "kube-api-access-kstg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.309447 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" (UID: "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.364367 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" (UID: "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.366311 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-config" (OuterVolumeSpecName: "config") pod "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" (UID: "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.384405 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" (UID: "9c8e0f17-b39b-4c05-abde-d1d73dd7b20c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.403120 4975 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.403151 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.403162 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kstg8\" (UniqueName: \"kubernetes.io/projected/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-kube-api-access-kstg8\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.403172 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.403182 4975 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.694270 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerStarted","Data":"2516b1d460893e626d58c868431ebe8c161ffae1b2ab67c3a4ad4a81f980cf2e"} Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.697245 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6d6969d8-6mpvr" event={"ID":"9c8e0f17-b39b-4c05-abde-d1d73dd7b20c","Type":"ContainerDied","Data":"936d6a6471ad562a0fc2b825a39b585fa64f4f2568c09cafb8a57751e4faf8e5"} Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.697280 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d6d6969d8-6mpvr" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.697319 4975 scope.go:117] "RemoveContainer" containerID="e18c920c31b082cbd365a6ee627abb84120e29d97369f76ac80c364344894f59" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.734098 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d6d6969d8-6mpvr"] Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.738306 4975 scope.go:117] "RemoveContainer" containerID="170be906508f324bc03f9f51c81d740af00020c9ee1ec02a921092d973426289" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.747263 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5d6d6969d8-6mpvr"] Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.767526 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" path="/var/lib/kubelet/pods/9c8e0f17-b39b-4c05-abde-d1d73dd7b20c/volumes" Jan 22 10:57:21 crc kubenswrapper[4975]: I0122 10:57:21.833170 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 10:57:22 crc kubenswrapper[4975]: I0122 10:57:22.712017 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerStarted","Data":"3c8505b97c8a3ff6e5b9f205fb50a36c2615fe4f9faa1d5b3a5c3341a04dfb94"} Jan 22 10:57:23 crc kubenswrapper[4975]: I0122 10:57:23.213916 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:23 crc kubenswrapper[4975]: I0122 10:57:23.216661 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-78ffc5fbc9-46msq" Jan 22 10:57:26 crc kubenswrapper[4975]: I0122 10:57:26.658596 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5bbdd45bdd-ckdgl" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 10:57:26 crc kubenswrapper[4975]: I0122 10:57:26.659126 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:57:26 crc kubenswrapper[4975]: I0122 10:57:26.906682 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:27 crc kubenswrapper[4975]: I0122 10:57:27.058931 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 10:57:31 crc kubenswrapper[4975]: I0122 10:57:31.845574 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerStarted","Data":"c93f97dc12efb1a56d38367a626078aba24b0f4f723b29aa0e16b2345dcf94e7"} Jan 22 10:57:31 crc kubenswrapper[4975]: I0122 10:57:31.848637 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="ceilometer-central-agent" containerID="cri-o://c18b802c3ac7ebe5ace13084c6c8e9a95ec33078daa0371abd36747966059f06" gracePeriod=30 Jan 22 10:57:31 crc kubenswrapper[4975]: I0122 10:57:31.849170 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:57:31 crc 
kubenswrapper[4975]: I0122 10:57:31.850924 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="proxy-httpd" containerID="cri-o://c93f97dc12efb1a56d38367a626078aba24b0f4f723b29aa0e16b2345dcf94e7" gracePeriod=30 Jan 22 10:57:31 crc kubenswrapper[4975]: I0122 10:57:31.851120 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="sg-core" containerID="cri-o://3c8505b97c8a3ff6e5b9f205fb50a36c2615fe4f9faa1d5b3a5c3341a04dfb94" gracePeriod=30 Jan 22 10:57:31 crc kubenswrapper[4975]: I0122 10:57:31.851265 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="ceilometer-notification-agent" containerID="cri-o://2516b1d460893e626d58c868431ebe8c161ffae1b2ab67c3a4ad4a81f980cf2e" gracePeriod=30 Jan 22 10:57:31 crc kubenswrapper[4975]: I0122 10:57:31.857554 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1217b84a-f2e0-49f9-a237-30f9493f1ca4","Type":"ContainerStarted","Data":"733283fa52ff11cafd59a3e8944b40dfff198c65679a7a11f42a085661f148ba"} Jan 22 10:57:31 crc kubenswrapper[4975]: I0122 10:57:31.872673 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.111180453 podStartE2EDuration="13.872652538s" podCreationTimestamp="2026-01-22 10:57:18 +0000 UTC" firstStartedPulling="2026-01-22 10:57:19.456351845 +0000 UTC m=+1232.114387955" lastFinishedPulling="2026-01-22 10:57:31.21782393 +0000 UTC m=+1243.875860040" observedRunningTime="2026-01-22 10:57:31.869625404 +0000 UTC m=+1244.527661524" watchObservedRunningTime="2026-01-22 10:57:31.872652538 +0000 UTC m=+1244.530688658" Jan 22 10:57:31 crc kubenswrapper[4975]: I0122 10:57:31.891733 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.218026372 podStartE2EDuration="19.891715398s" podCreationTimestamp="2026-01-22 10:57:12 +0000 UTC" firstStartedPulling="2026-01-22 10:57:13.542970482 +0000 UTC m=+1226.201006592" lastFinishedPulling="2026-01-22 10:57:31.216659508 +0000 UTC m=+1243.874695618" observedRunningTime="2026-01-22 10:57:31.886679828 +0000 UTC m=+1244.544715958" watchObservedRunningTime="2026-01-22 10:57:31.891715398 +0000 UTC m=+1244.549751528" Jan 22 10:57:32 crc kubenswrapper[4975]: I0122 10:57:32.870356 4975 generic.go:334] "Generic (PLEG): container finished" podID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerID="c93f97dc12efb1a56d38367a626078aba24b0f4f723b29aa0e16b2345dcf94e7" exitCode=0 Jan 22 10:57:32 crc kubenswrapper[4975]: I0122 10:57:32.870693 4975 generic.go:334] "Generic (PLEG): container finished" podID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerID="3c8505b97c8a3ff6e5b9f205fb50a36c2615fe4f9faa1d5b3a5c3341a04dfb94" exitCode=2 Jan 22 10:57:32 crc kubenswrapper[4975]: I0122 10:57:32.870447 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerDied","Data":"c93f97dc12efb1a56d38367a626078aba24b0f4f723b29aa0e16b2345dcf94e7"} Jan 22 10:57:32 crc kubenswrapper[4975]: I0122 10:57:32.870769 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerDied","Data":"3c8505b97c8a3ff6e5b9f205fb50a36c2615fe4f9faa1d5b3a5c3341a04dfb94"} Jan 22 10:57:34 crc kubenswrapper[4975]: I0122 10:57:34.893737 4975 generic.go:334] "Generic (PLEG): container finished" podID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerID="2516b1d460893e626d58c868431ebe8c161ffae1b2ab67c3a4ad4a81f980cf2e" exitCode=0 Jan 22 10:57:34 crc kubenswrapper[4975]: I0122 10:57:34.894017 4975 generic.go:334] "Generic (PLEG): container finished" podID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerID="c18b802c3ac7ebe5ace13084c6c8e9a95ec33078daa0371abd36747966059f06" exitCode=0 Jan 22 10:57:34 crc kubenswrapper[4975]: I0122 10:57:34.893792 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerDied","Data":"2516b1d460893e626d58c868431ebe8c161ffae1b2ab67c3a4ad4a81f980cf2e"} Jan 22 10:57:34 crc kubenswrapper[4975]: I0122 10:57:34.894074 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerDied","Data":"c18b802c3ac7ebe5ace13084c6c8e9a95ec33078daa0371abd36747966059f06"} Jan 22 10:57:34 crc kubenswrapper[4975]: I0122 10:57:34.896299 4975 generic.go:334] "Generic (PLEG): container finished" podID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerID="39b1484aa175e8832a4a3a02dba23da796deff516ab271d29c22c353de6d296d" exitCode=137 Jan 22 10:57:34 crc kubenswrapper[4975]: I0122 10:57:34.896363 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbdd45bdd-ckdgl" event={"ID":"fc5d94b3-48d8-485a-b307-9a011d77ee76","Type":"ContainerDied","Data":"39b1484aa175e8832a4a3a02dba23da796deff516ab271d29c22c353de6d296d"} Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.421954 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.431596 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468391 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-combined-ca-bundle\") pod \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468445 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-scripts\") pod \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468502 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5d94b3-48d8-485a-b307-9a011d77ee76-logs\") pod \"fc5d94b3-48d8-485a-b307-9a011d77ee76\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468561 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-sg-core-conf-yaml\") pod \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468580 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-log-httpd\") pod \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468599 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlvdz\" (UniqueName: \"kubernetes.io/projected/fc5d94b3-48d8-485a-b307-9a011d77ee76-kube-api-access-hlvdz\") pod \"fc5d94b3-48d8-485a-b307-9a011d77ee76\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468620 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-combined-ca-bundle\") pod \"fc5d94b3-48d8-485a-b307-9a011d77ee76\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468652 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4km5k\" (UniqueName: \"kubernetes.io/projected/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-kube-api-access-4km5k\") pod \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468669 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-secret-key\") pod \"fc5d94b3-48d8-485a-b307-9a011d77ee76\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468685 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-run-httpd\") pod \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\" (UID: 
\"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468702 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-config-data\") pod \"fc5d94b3-48d8-485a-b307-9a011d77ee76\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468730 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-config-data\") pod \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\" (UID: \"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468799 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-scripts\") pod \"fc5d94b3-48d8-485a-b307-9a011d77ee76\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.468842 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-tls-certs\") pod \"fc5d94b3-48d8-485a-b307-9a011d77ee76\" (UID: \"fc5d94b3-48d8-485a-b307-9a011d77ee76\") " Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.469547 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" (UID: "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.470667 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5d94b3-48d8-485a-b307-9a011d77ee76-logs" (OuterVolumeSpecName: "logs") pod "fc5d94b3-48d8-485a-b307-9a011d77ee76" (UID: "fc5d94b3-48d8-485a-b307-9a011d77ee76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.471074 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" (UID: "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.478293 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-scripts" (OuterVolumeSpecName: "scripts") pod "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" (UID: "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.479094 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fc5d94b3-48d8-485a-b307-9a011d77ee76" (UID: "fc5d94b3-48d8-485a-b307-9a011d77ee76"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.483654 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-kube-api-access-4km5k" (OuterVolumeSpecName: "kube-api-access-4km5k") pod "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" (UID: "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5"). InnerVolumeSpecName "kube-api-access-4km5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.502092 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5d94b3-48d8-485a-b307-9a011d77ee76-kube-api-access-hlvdz" (OuterVolumeSpecName: "kube-api-access-hlvdz") pod "fc5d94b3-48d8-485a-b307-9a011d77ee76" (UID: "fc5d94b3-48d8-485a-b307-9a011d77ee76"). InnerVolumeSpecName "kube-api-access-hlvdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.520411 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc5d94b3-48d8-485a-b307-9a011d77ee76" (UID: "fc5d94b3-48d8-485a-b307-9a011d77ee76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.530425 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-config-data" (OuterVolumeSpecName: "config-data") pod "fc5d94b3-48d8-485a-b307-9a011d77ee76" (UID: "fc5d94b3-48d8-485a-b307-9a011d77ee76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.537124 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "fc5d94b3-48d8-485a-b307-9a011d77ee76" (UID: "fc5d94b3-48d8-485a-b307-9a011d77ee76"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.546242 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-scripts" (OuterVolumeSpecName: "scripts") pod "fc5d94b3-48d8-485a-b307-9a011d77ee76" (UID: "fc5d94b3-48d8-485a-b307-9a011d77ee76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.549568 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" (UID: "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570438 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570616 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5d94b3-48d8-485a-b307-9a011d77ee76-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570629 4975 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570640 4975 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570655 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlvdz\" (UniqueName: \"kubernetes.io/projected/fc5d94b3-48d8-485a-b307-9a011d77ee76-kube-api-access-hlvdz\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570668 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570678 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4km5k\" (UniqueName: \"kubernetes.io/projected/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-kube-api-access-4km5k\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570689 4975 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570700 4975 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570710 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570721 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc5d94b3-48d8-485a-b307-9a011d77ee76-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.570731 4975 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc5d94b3-48d8-485a-b307-9a011d77ee76-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.592738 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" (UID: 
"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.592748 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-config-data" (OuterVolumeSpecName: "config-data") pod "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" (UID: "be1a1a61-c32e-405b-b6e3-1291ae4ff8d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.673461 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.673487 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.905628 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbdd45bdd-ckdgl" event={"ID":"fc5d94b3-48d8-485a-b307-9a011d77ee76","Type":"ContainerDied","Data":"7d705752375c02cce9a9eaa6ac773f1e737ce3d67c65850e8a3af5d224339ec5"} Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.905671 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bbdd45bdd-ckdgl" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.905693 4975 scope.go:117] "RemoveContainer" containerID="e133221d6ac1001025ac6a494021a555d0ffd31b1f0f89a766296818d86b5169" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.913904 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be1a1a61-c32e-405b-b6e3-1291ae4ff8d5","Type":"ContainerDied","Data":"caa6e18c0d30dd5d5d8756a31071ef4234b0188c956d9e694187f9de0c94bf05"} Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.914019 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.936739 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bbdd45bdd-ckdgl"] Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.943914 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5bbdd45bdd-ckdgl"] Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.960550 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:35 crc kubenswrapper[4975]: I0122 10:57:35.960604 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.014624 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:36 crc kubenswrapper[4975]: E0122 10:57:36.015052 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerName="neutron-api" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015066 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerName="neutron-api" Jan 22 10:57:36 crc kubenswrapper[4975]: E0122 10:57:36.015088 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="sg-core" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015095 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="sg-core" Jan 22 10:57:36 crc kubenswrapper[4975]: E0122 10:57:36.015116 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015123 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon" Jan 22 10:57:36 crc kubenswrapper[4975]: E0122 10:57:36.015135 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerName="neutron-httpd" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015142 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerName="neutron-httpd" Jan 22 10:57:36 crc kubenswrapper[4975]: E0122 10:57:36.015160 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="ceilometer-notification-agent" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015168 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="ceilometer-notification-agent" Jan 22 10:57:36 crc kubenswrapper[4975]: E0122 10:57:36.015185 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon-log" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015192 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon-log" Jan 22 10:57:36 crc kubenswrapper[4975]: E0122 10:57:36.015208 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="proxy-httpd" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015214 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" 
containerName="proxy-httpd" Jan 22 10:57:36 crc kubenswrapper[4975]: E0122 10:57:36.015224 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="ceilometer-central-agent" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015232 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="ceilometer-central-agent" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015448 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="sg-core" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015468 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015477 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerName="neutron-httpd" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015493 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" containerName="horizon-log" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015515 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="ceilometer-notification-agent" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015524 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="proxy-httpd" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015531 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8e0f17-b39b-4c05-abde-d1d73dd7b20c" containerName="neutron-api" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.015543 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" containerName="ceilometer-central-agent" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.017699 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.019636 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.019818 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.023512 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.066794 4975 scope.go:117] "RemoveContainer" containerID="39b1484aa175e8832a4a3a02dba23da796deff516ab271d29c22c353de6d296d" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.092050 4975 scope.go:117] "RemoveContainer" containerID="c93f97dc12efb1a56d38367a626078aba24b0f4f723b29aa0e16b2345dcf94e7" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.106908 4975 scope.go:117] "RemoveContainer" containerID="3c8505b97c8a3ff6e5b9f205fb50a36c2615fe4f9faa1d5b3a5c3341a04dfb94" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.124841 4975 scope.go:117] "RemoveContainer" containerID="2516b1d460893e626d58c868431ebe8c161ffae1b2ab67c3a4ad4a81f980cf2e" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.142742 4975 scope.go:117] "RemoveContainer" containerID="c18b802c3ac7ebe5ace13084c6c8e9a95ec33078daa0371abd36747966059f06" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.185213 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-config-data\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.185302 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.185962 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.186051 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-run-httpd\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.186174 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fj9s\" (UniqueName: \"kubernetes.io/projected/e622e272-9a72-488c-a99b-cc292d65b633-kube-api-access-2fj9s\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.186380 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-scripts\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.186522 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-log-httpd\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.288747 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.289043 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-run-httpd\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.289094 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fj9s\" (UniqueName: \"kubernetes.io/projected/e622e272-9a72-488c-a99b-cc292d65b633-kube-api-access-2fj9s\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.289164 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-scripts\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.289209 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-log-httpd\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.289308 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-config-data\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.289350 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.289785 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-run-httpd\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.289786 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-log-httpd\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.293308 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-config-data\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.293894 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.294021 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-scripts\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.294301 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.311129 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fj9s\" (UniqueName: \"kubernetes.io/projected/e622e272-9a72-488c-a99b-cc292d65b633-kube-api-access-2fj9s\") pod \"ceilometer-0\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.346129 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.842788 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:36 crc kubenswrapper[4975]: I0122 10:57:36.925786 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerStarted","Data":"87dfea9691e5c2c24800c25dcd31d5d55d30d389d87f204c207d4e9b2bd5c5d8"} Jan 22 10:57:37 crc kubenswrapper[4975]: I0122 10:57:37.204574 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:37 crc kubenswrapper[4975]: I0122 10:57:37.769131 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be1a1a61-c32e-405b-b6e3-1291ae4ff8d5" path="/var/lib/kubelet/pods/be1a1a61-c32e-405b-b6e3-1291ae4ff8d5/volumes" Jan 22 10:57:37 crc kubenswrapper[4975]: I0122 10:57:37.770079 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5d94b3-48d8-485a-b307-9a011d77ee76" path="/var/lib/kubelet/pods/fc5d94b3-48d8-485a-b307-9a011d77ee76/volumes" Jan 22 10:57:37 crc kubenswrapper[4975]: I0122 10:57:37.939574 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerStarted","Data":"a56cc525975f1020bba3ddc1eb5e4efc57c03a15690d0c54a727679acb46b609"} Jan 22 10:57:38 crc kubenswrapper[4975]: I0122 10:57:38.949491 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerStarted","Data":"98732f990566e868b063928455c3afb57aee86317836b5522d178dd70e91c889"} Jan 22 10:57:38 crc kubenswrapper[4975]: I0122 10:57:38.950078 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerStarted","Data":"ddedd5eb868748c98f7dfaefa9e1aa5981616bd6bfbc24ca64335968eaefd43f"} Jan 22 10:57:39 crc kubenswrapper[4975]: I0122 10:57:39.510416 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:57:39 crc kubenswrapper[4975]: I0122 10:57:39.510754 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c2656030-bade-452d-a4cb-ba4c12935750" containerName="glance-log" containerID="cri-o://49adfbb458fcdae3bdc711778acedac8940b15179524bb30862fa2112098c145" gracePeriod=30 Jan 22 10:57:39 crc kubenswrapper[4975]: I0122 10:57:39.510918 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c2656030-bade-452d-a4cb-ba4c12935750" containerName="glance-httpd" containerID="cri-o://3313ee02c50a816d3513f2cf5bb15cd21c422a33ae76ce77a75d5636bf912605" gracePeriod=30 Jan 22 10:57:39 crc kubenswrapper[4975]: I0122 10:57:39.958367 4975 generic.go:334] "Generic (PLEG): container finished" podID="c2656030-bade-452d-a4cb-ba4c12935750" containerID="49adfbb458fcdae3bdc711778acedac8940b15179524bb30862fa2112098c145" exitCode=143 Jan 22 10:57:39 crc kubenswrapper[4975]: I0122 10:57:39.958444 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c2656030-bade-452d-a4cb-ba4c12935750","Type":"ContainerDied","Data":"49adfbb458fcdae3bdc711778acedac8940b15179524bb30862fa2112098c145"} Jan 22 10:57:40 crc kubenswrapper[4975]: I0122 
10:57:40.976162 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerStarted","Data":"e1fd0446a6ed34c3f57892f12712fc0c51c5f68d5dd83a4251b5b15637467b43"} Jan 22 10:57:40 crc kubenswrapper[4975]: I0122 10:57:40.976694 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="ceilometer-central-agent" containerID="cri-o://a56cc525975f1020bba3ddc1eb5e4efc57c03a15690d0c54a727679acb46b609" gracePeriod=30 Jan 22 10:57:40 crc kubenswrapper[4975]: I0122 10:57:40.977012 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:57:40 crc kubenswrapper[4975]: I0122 10:57:40.977296 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="proxy-httpd" containerID="cri-o://e1fd0446a6ed34c3f57892f12712fc0c51c5f68d5dd83a4251b5b15637467b43" gracePeriod=30 Jan 22 10:57:40 crc kubenswrapper[4975]: I0122 10:57:40.977378 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="sg-core" containerID="cri-o://98732f990566e868b063928455c3afb57aee86317836b5522d178dd70e91c889" gracePeriod=30 Jan 22 10:57:40 crc kubenswrapper[4975]: I0122 10:57:40.977421 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="ceilometer-notification-agent" containerID="cri-o://ddedd5eb868748c98f7dfaefa9e1aa5981616bd6bfbc24ca64335968eaefd43f" gracePeriod=30 Jan 22 10:57:41 crc kubenswrapper[4975]: I0122 10:57:41.012499 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.85957113 podStartE2EDuration="6.012484437s" podCreationTimestamp="2026-01-22 10:57:35 +0000 UTC" firstStartedPulling="2026-01-22 10:57:36.835614625 +0000 UTC m=+1249.493650745" lastFinishedPulling="2026-01-22 10:57:39.988527942 +0000 UTC m=+1252.646564052" observedRunningTime="2026-01-22 10:57:41.00574616 +0000 UTC m=+1253.663782280" watchObservedRunningTime="2026-01-22 10:57:41.012484437 +0000 UTC m=+1253.670520547" Jan 22 10:57:41 crc kubenswrapper[4975]: I0122 10:57:41.987593 4975 generic.go:334] "Generic (PLEG): container finished" podID="e622e272-9a72-488c-a99b-cc292d65b633" containerID="e1fd0446a6ed34c3f57892f12712fc0c51c5f68d5dd83a4251b5b15637467b43" exitCode=0 Jan 22 10:57:41 crc kubenswrapper[4975]: I0122 10:57:41.987902 4975 generic.go:334] "Generic (PLEG): container finished" podID="e622e272-9a72-488c-a99b-cc292d65b633" containerID="98732f990566e868b063928455c3afb57aee86317836b5522d178dd70e91c889" exitCode=2 Jan 22 10:57:41 crc kubenswrapper[4975]: I0122 10:57:41.987915 4975 generic.go:334] "Generic (PLEG): container finished" podID="e622e272-9a72-488c-a99b-cc292d65b633" containerID="ddedd5eb868748c98f7dfaefa9e1aa5981616bd6bfbc24ca64335968eaefd43f" exitCode=0 Jan 22 10:57:41 crc kubenswrapper[4975]: I0122 10:57:41.987937 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerDied","Data":"e1fd0446a6ed34c3f57892f12712fc0c51c5f68d5dd83a4251b5b15637467b43"} Jan 22 10:57:41 crc kubenswrapper[4975]: I0122 10:57:41.987968 4975 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerDied","Data":"98732f990566e868b063928455c3afb57aee86317836b5522d178dd70e91c889"} Jan 22 10:57:41 crc kubenswrapper[4975]: I0122 10:57:41.987983 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerDied","Data":"ddedd5eb868748c98f7dfaefa9e1aa5981616bd6bfbc24ca64335968eaefd43f"} Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.103046 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.103487 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="465645fc-a907-44d6-8d2b-30e29b40187b" containerName="glance-httpd" containerID="cri-o://f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578" gracePeriod=30 Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.103675 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="465645fc-a907-44d6-8d2b-30e29b40187b" containerName="glance-log" containerID="cri-o://b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03" gracePeriod=30 Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.147950 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-n44m9"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.149083 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.183316 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-n44m9"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.263838 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-ct85p"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.265741 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.290821 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ct85p"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.291835 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8fefa1-f558-4f60-80a2-053651036280-operator-scripts\") pod \"nova-api-db-create-n44m9\" (UID: \"8e8fefa1-f558-4f60-80a2-053651036280\") " pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.291952 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwzsl\" (UniqueName: \"kubernetes.io/projected/8e8fefa1-f558-4f60-80a2-053651036280-kube-api-access-fwzsl\") pod \"nova-api-db-create-n44m9\" (UID: \"8e8fefa1-f558-4f60-80a2-053651036280\") " pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.355323 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5e0b-account-create-update-gtwjv"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.356820 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.365974 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-z65jj"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.367353 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.368735 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.375024 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5e0b-account-create-update-gtwjv"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.393586 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwzsl\" (UniqueName: \"kubernetes.io/projected/8e8fefa1-f558-4f60-80a2-053651036280-kube-api-access-fwzsl\") pod \"nova-api-db-create-n44m9\" (UID: \"8e8fefa1-f558-4f60-80a2-053651036280\") " pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.393666 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbdjf\" (UniqueName: \"kubernetes.io/projected/8a6f5135-980c-42f0-91fb-fcc46f662331-kube-api-access-bbdjf\") pod \"nova-cell0-db-create-ct85p\" (UID: \"8a6f5135-980c-42f0-91fb-fcc46f662331\") " pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.393717 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8fefa1-f558-4f60-80a2-053651036280-operator-scripts\") pod \"nova-api-db-create-n44m9\" (UID: \"8e8fefa1-f558-4f60-80a2-053651036280\") " pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.393732 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a6f5135-980c-42f0-91fb-fcc46f662331-operator-scripts\") pod \"nova-cell0-db-create-ct85p\" (UID: \"8a6f5135-980c-42f0-91fb-fcc46f662331\") " pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.394527 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8fefa1-f558-4f60-80a2-053651036280-operator-scripts\") pod \"nova-api-db-create-n44m9\" (UID: \"8e8fefa1-f558-4f60-80a2-053651036280\") " pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.394570 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z65jj"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.410966 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwzsl\" (UniqueName: \"kubernetes.io/projected/8e8fefa1-f558-4f60-80a2-053651036280-kube-api-access-fwzsl\") pod \"nova-api-db-create-n44m9\" (UID: \"8e8fefa1-f558-4f60-80a2-053651036280\") " pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.464504 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.495884 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67122ac-c96e-4dfb-9b46-59e75af7ecad-operator-scripts\") pod \"nova-cell1-db-create-z65jj\" (UID: \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\") " pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.496224 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbdjf\" (UniqueName: \"kubernetes.io/projected/8a6f5135-980c-42f0-91fb-fcc46f662331-kube-api-access-bbdjf\") pod \"nova-cell0-db-create-ct85p\" (UID: \"8a6f5135-980c-42f0-91fb-fcc46f662331\") " pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.496261 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h299b\" (UniqueName: \"kubernetes.io/projected/c67122ac-c96e-4dfb-9b46-59e75af7ecad-kube-api-access-h299b\") pod \"nova-cell1-db-create-z65jj\" (UID: \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\") " pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.496304 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a6f5135-980c-42f0-91fb-fcc46f662331-operator-scripts\") pod \"nova-cell0-db-create-ct85p\" (UID: \"8a6f5135-980c-42f0-91fb-fcc46f662331\") " pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.497231 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9a9f89-8c5f-4284-82d5-083fb294f744-operator-scripts\") pod \"nova-api-5e0b-account-create-update-gtwjv\" (UID: \"5d9a9f89-8c5f-4284-82d5-083fb294f744\") " pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.497257 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a6f5135-980c-42f0-91fb-fcc46f662331-operator-scripts\") pod \"nova-cell0-db-create-ct85p\" (UID: \"8a6f5135-980c-42f0-91fb-fcc46f662331\") " pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.497267 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg7dt\" (UniqueName: \"kubernetes.io/projected/5d9a9f89-8c5f-4284-82d5-083fb294f744-kube-api-access-hg7dt\") pod \"nova-api-5e0b-account-create-update-gtwjv\" (UID: \"5d9a9f89-8c5f-4284-82d5-083fb294f744\") " pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.528765 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbdjf\" (UniqueName: \"kubernetes.io/projected/8a6f5135-980c-42f0-91fb-fcc46f662331-kube-api-access-bbdjf\") pod \"nova-cell0-db-create-ct85p\" (UID: \"8a6f5135-980c-42f0-91fb-fcc46f662331\") " pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.555116 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3e3d-account-create-update-gnkwp"] Jan 22 10:57:42 crc kubenswrapper[4975]: 
I0122 10:57:42.556288 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.559858 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.569544 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3e3d-account-create-update-gnkwp"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.595687 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.603066 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67122ac-c96e-4dfb-9b46-59e75af7ecad-operator-scripts\") pod \"nova-cell1-db-create-z65jj\" (UID: \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\") " pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.603159 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h299b\" (UniqueName: \"kubernetes.io/projected/c67122ac-c96e-4dfb-9b46-59e75af7ecad-kube-api-access-h299b\") pod \"nova-cell1-db-create-z65jj\" (UID: \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\") " pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.603254 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9a9f89-8c5f-4284-82d5-083fb294f744-operator-scripts\") pod \"nova-api-5e0b-account-create-update-gtwjv\" (UID: \"5d9a9f89-8c5f-4284-82d5-083fb294f744\") " pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.603290 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg7dt\" (UniqueName: \"kubernetes.io/projected/5d9a9f89-8c5f-4284-82d5-083fb294f744-kube-api-access-hg7dt\") pod \"nova-api-5e0b-account-create-update-gtwjv\" (UID: \"5d9a9f89-8c5f-4284-82d5-083fb294f744\") " pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.603890 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67122ac-c96e-4dfb-9b46-59e75af7ecad-operator-scripts\") pod \"nova-cell1-db-create-z65jj\" (UID: \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\") " pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.604504 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9a9f89-8c5f-4284-82d5-083fb294f744-operator-scripts\") pod \"nova-api-5e0b-account-create-update-gtwjv\" (UID: \"5d9a9f89-8c5f-4284-82d5-083fb294f744\") " pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.623148 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg7dt\" (UniqueName: \"kubernetes.io/projected/5d9a9f89-8c5f-4284-82d5-083fb294f744-kube-api-access-hg7dt\") pod \"nova-api-5e0b-account-create-update-gtwjv\" (UID: \"5d9a9f89-8c5f-4284-82d5-083fb294f744\") " pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:42 crc 
kubenswrapper[4975]: I0122 10:57:42.631218 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h299b\" (UniqueName: \"kubernetes.io/projected/c67122ac-c96e-4dfb-9b46-59e75af7ecad-kube-api-access-h299b\") pod \"nova-cell1-db-create-z65jj\" (UID: \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\") " pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.671027 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.681824 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.704694 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649rb\" (UniqueName: \"kubernetes.io/projected/10ade78c-f910-4f71-98ef-6751e66bcd1e-kube-api-access-649rb\") pod \"nova-cell0-3e3d-account-create-update-gnkwp\" (UID: \"10ade78c-f910-4f71-98ef-6751e66bcd1e\") " pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.705025 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ade78c-f910-4f71-98ef-6751e66bcd1e-operator-scripts\") pod \"nova-cell0-3e3d-account-create-update-gnkwp\" (UID: \"10ade78c-f910-4f71-98ef-6751e66bcd1e\") " pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.771268 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-25c5-account-create-update-96jwm"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.772970 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.776608 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.782490 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-25c5-account-create-update-96jwm"] Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.808327 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ade78c-f910-4f71-98ef-6751e66bcd1e-operator-scripts\") pod \"nova-cell0-3e3d-account-create-update-gnkwp\" (UID: \"10ade78c-f910-4f71-98ef-6751e66bcd1e\") " pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.808477 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649rb\" (UniqueName: \"kubernetes.io/projected/10ade78c-f910-4f71-98ef-6751e66bcd1e-kube-api-access-649rb\") pod \"nova-cell0-3e3d-account-create-update-gnkwp\" (UID: \"10ade78c-f910-4f71-98ef-6751e66bcd1e\") " pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.808978 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ade78c-f910-4f71-98ef-6751e66bcd1e-operator-scripts\") pod \"nova-cell0-3e3d-account-create-update-gnkwp\" (UID: \"10ade78c-f910-4f71-98ef-6751e66bcd1e\") " pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.829917 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649rb\" (UniqueName: \"kubernetes.io/projected/10ade78c-f910-4f71-98ef-6751e66bcd1e-kube-api-access-649rb\") pod \"nova-cell0-3e3d-account-create-update-gnkwp\" (UID: \"10ade78c-f910-4f71-98ef-6751e66bcd1e\") " pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.909636 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eefbf47-b3c8-4491-b832-dd650404b5e4-operator-scripts\") pod \"nova-cell1-25c5-account-create-update-96jwm\" (UID: \"0eefbf47-b3c8-4491-b832-dd650404b5e4\") " pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.909701 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xtcg\" (UniqueName: \"kubernetes.io/projected/0eefbf47-b3c8-4491-b832-dd650404b5e4-kube-api-access-8xtcg\") pod \"nova-cell1-25c5-account-create-update-96jwm\" (UID: \"0eefbf47-b3c8-4491-b832-dd650404b5e4\") " pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.919680 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:42 crc kubenswrapper[4975]: I0122 10:57:42.971985 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-n44m9"] Jan 22 10:57:42 crc kubenswrapper[4975]: W0122 10:57:42.982398 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8fefa1_f558_4f60_80a2_053651036280.slice/crio-3c0477be44b5dc2845dff5cfdc32e9486d4b45d321e286bb60e2e1c48bf83e40 WatchSource:0}: Error finding container 3c0477be44b5dc2845dff5cfdc32e9486d4b45d321e286bb60e2e1c48bf83e40: Status 404 returned error can't find the container with id 3c0477be44b5dc2845dff5cfdc32e9486d4b45d321e286bb60e2e1c48bf83e40 Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.001837 4975 generic.go:334] "Generic (PLEG): container finished" podID="465645fc-a907-44d6-8d2b-30e29b40187b" containerID="b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03" exitCode=143 Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.001906 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465645fc-a907-44d6-8d2b-30e29b40187b","Type":"ContainerDied","Data":"b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03"} Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.003936 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n44m9" event={"ID":"8e8fefa1-f558-4f60-80a2-053651036280","Type":"ContainerStarted","Data":"3c0477be44b5dc2845dff5cfdc32e9486d4b45d321e286bb60e2e1c48bf83e40"} Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.008766 4975 generic.go:334] "Generic (PLEG): container finished" podID="c2656030-bade-452d-a4cb-ba4c12935750" containerID="3313ee02c50a816d3513f2cf5bb15cd21c422a33ae76ce77a75d5636bf912605" exitCode=0 Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.008798 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c2656030-bade-452d-a4cb-ba4c12935750","Type":"ContainerDied","Data":"3313ee02c50a816d3513f2cf5bb15cd21c422a33ae76ce77a75d5636bf912605"} Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.011869 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eefbf47-b3c8-4491-b832-dd650404b5e4-operator-scripts\") pod \"nova-cell1-25c5-account-create-update-96jwm\" (UID: \"0eefbf47-b3c8-4491-b832-dd650404b5e4\") " pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.011947 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xtcg\" (UniqueName: \"kubernetes.io/projected/0eefbf47-b3c8-4491-b832-dd650404b5e4-kube-api-access-8xtcg\") pod \"nova-cell1-25c5-account-create-update-96jwm\" (UID: \"0eefbf47-b3c8-4491-b832-dd650404b5e4\") " pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.013177 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eefbf47-b3c8-4491-b832-dd650404b5e4-operator-scripts\") pod \"nova-cell1-25c5-account-create-update-96jwm\" (UID: \"0eefbf47-b3c8-4491-b832-dd650404b5e4\") " pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 
10:57:43.030737 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xtcg\" (UniqueName: \"kubernetes.io/projected/0eefbf47-b3c8-4491-b832-dd650404b5e4-kube-api-access-8xtcg\") pod \"nova-cell1-25c5-account-create-update-96jwm\" (UID: \"0eefbf47-b3c8-4491-b832-dd650404b5e4\") " pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.104249 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.124455 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ct85p"] Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.215428 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z65jj"] Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.236142 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5e0b-account-create-update-gtwjv"] Jan 22 10:57:43 crc kubenswrapper[4975]: W0122 10:57:43.255816 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d9a9f89_8c5f_4284_82d5_083fb294f744.slice/crio-3785f95fb75c255268faa1007830ef2c21723f78aac042ca9708994bd73aa37d WatchSource:0}: Error finding container 3785f95fb75c255268faa1007830ef2c21723f78aac042ca9708994bd73aa37d: Status 404 returned error can't find the container with id 3785f95fb75c255268faa1007830ef2c21723f78aac042ca9708994bd73aa37d Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.486039 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3e3d-account-create-update-gnkwp"] Jan 22 10:57:43 crc kubenswrapper[4975]: W0122 10:57:43.624154 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10ade78c_f910_4f71_98ef_6751e66bcd1e.slice/crio-82e79ddd5d9ae5fe678583490198358f48dc0d45addce8aed936c3a89110910c WatchSource:0}: Error finding container 82e79ddd5d9ae5fe678583490198358f48dc0d45addce8aed936c3a89110910c: Status 404 returned error can't find the container with id 82e79ddd5d9ae5fe678583490198358f48dc0d45addce8aed936c3a89110910c Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.625772 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.693144 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-25c5-account-create-update-96jwm"] Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.731952 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-logs\") pod \"c2656030-bade-452d-a4cb-ba4c12935750\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.732046 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-httpd-run\") pod \"c2656030-bade-452d-a4cb-ba4c12935750\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.732127 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-combined-ca-bundle\") pod \"c2656030-bade-452d-a4cb-ba4c12935750\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.732212 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-scripts\") pod \"c2656030-bade-452d-a4cb-ba4c12935750\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.732262 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-config-data\") pod \"c2656030-bade-452d-a4cb-ba4c12935750\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.732321 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c2656030-bade-452d-a4cb-ba4c12935750\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.732433 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-public-tls-certs\") pod \"c2656030-bade-452d-a4cb-ba4c12935750\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.732476 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/c2656030-bade-452d-a4cb-ba4c12935750-kube-api-access-76fj8\") pod \"c2656030-bade-452d-a4cb-ba4c12935750\" (UID: \"c2656030-bade-452d-a4cb-ba4c12935750\") " Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.733745 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-logs" (OuterVolumeSpecName: "logs") pod "c2656030-bade-452d-a4cb-ba4c12935750" (UID: "c2656030-bade-452d-a4cb-ba4c12935750"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.734056 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c2656030-bade-452d-a4cb-ba4c12935750" (UID: "c2656030-bade-452d-a4cb-ba4c12935750"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.738659 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-scripts" (OuterVolumeSpecName: "scripts") pod "c2656030-bade-452d-a4cb-ba4c12935750" (UID: "c2656030-bade-452d-a4cb-ba4c12935750"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.740198 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2656030-bade-452d-a4cb-ba4c12935750-kube-api-access-76fj8" (OuterVolumeSpecName: "kube-api-access-76fj8") pod "c2656030-bade-452d-a4cb-ba4c12935750" (UID: "c2656030-bade-452d-a4cb-ba4c12935750"). InnerVolumeSpecName "kube-api-access-76fj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.743197 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "c2656030-bade-452d-a4cb-ba4c12935750" (UID: "c2656030-bade-452d-a4cb-ba4c12935750"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.836358 4975 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.836393 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.836425 4975 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.836441 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/c2656030-bade-452d-a4cb-ba4c12935750-kube-api-access-76fj8\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.836456 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2656030-bade-452d-a4cb-ba4c12935750-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.873849 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2656030-bade-452d-a4cb-ba4c12935750" (UID: "c2656030-bade-452d-a4cb-ba4c12935750"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.877453 4975 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.902684 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-config-data" (OuterVolumeSpecName: "config-data") pod "c2656030-bade-452d-a4cb-ba4c12935750" (UID: "c2656030-bade-452d-a4cb-ba4c12935750"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.913227 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c2656030-bade-452d-a4cb-ba4c12935750" (UID: "c2656030-bade-452d-a4cb-ba4c12935750"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.957274 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.957362 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.957384 4975 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:43 crc kubenswrapper[4975]: I0122 10:57:43.957410 4975 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2656030-bade-452d-a4cb-ba4c12935750-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.017695 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-25c5-account-create-update-96jwm" event={"ID":"0eefbf47-b3c8-4491-b832-dd650404b5e4","Type":"ContainerStarted","Data":"117a7d6732af42c07c047ce0ef16f6ffec8c3f05e178b3e15323a7842e9e5705"} Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.019554 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n44m9" event={"ID":"8e8fefa1-f558-4f60-80a2-053651036280","Type":"ContainerStarted","Data":"4216a7271788481db4cef95838b69d2e8b3ccff9b021c9c6a8f32b981a488540"} Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.022830 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.023097 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c2656030-bade-452d-a4cb-ba4c12935750","Type":"ContainerDied","Data":"6a24b19ff1b21047200bd354e933e08efa2fcd91aaac0ffa2fbb60e2caee51e2"} Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.023159 4975 scope.go:117] "RemoveContainer" containerID="3313ee02c50a816d3513f2cf5bb15cd21c422a33ae76ce77a75d5636bf912605" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.024150 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5e0b-account-create-update-gtwjv" event={"ID":"5d9a9f89-8c5f-4284-82d5-083fb294f744","Type":"ContainerStarted","Data":"3785f95fb75c255268faa1007830ef2c21723f78aac042ca9708994bd73aa37d"} Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.025715 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" event={"ID":"10ade78c-f910-4f71-98ef-6751e66bcd1e","Type":"ContainerStarted","Data":"82e79ddd5d9ae5fe678583490198358f48dc0d45addce8aed936c3a89110910c"} Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.026948 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ct85p" event={"ID":"8a6f5135-980c-42f0-91fb-fcc46f662331","Type":"ContainerStarted","Data":"88319f0ade3dd2061c830c7791a73dd560c19ac8dd4b5c7303e8b3aeefffc8a8"} Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.028544 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z65jj" event={"ID":"c67122ac-c96e-4dfb-9b46-59e75af7ecad","Type":"ContainerStarted","Data":"c7263093011245b466314331b697c4c46d6507a852e4285db89c6d83cd97d981"} Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.037655 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-n44m9" podStartSLOduration=2.037637344 podStartE2EDuration="2.037637344s" podCreationTimestamp="2026-01-22 10:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:57:44.035101853 +0000 UTC m=+1256.693137973" watchObservedRunningTime="2026-01-22 10:57:44.037637344 +0000 UTC m=+1256.695673454" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.063177 4975 scope.go:117] "RemoveContainer" containerID="49adfbb458fcdae3bdc711778acedac8940b15179524bb30862fa2112098c145" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.081463 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.094166 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.104612 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:57:44 crc kubenswrapper[4975]: E0122 10:57:44.105060 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2656030-bade-452d-a4cb-ba4c12935750" containerName="glance-log" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.105078 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2656030-bade-452d-a4cb-ba4c12935750" containerName="glance-log" Jan 22 10:57:44 crc kubenswrapper[4975]: E0122 10:57:44.105106 4975 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2656030-bade-452d-a4cb-ba4c12935750" containerName="glance-httpd" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.105112 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2656030-bade-452d-a4cb-ba4c12935750" containerName="glance-httpd" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.105275 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2656030-bade-452d-a4cb-ba4c12935750" containerName="glance-log" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.105292 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2656030-bade-452d-a4cb-ba4c12935750" containerName="glance-httpd" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.106229 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.125064 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.125284 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.130249 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.261743 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.262096 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.262183 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.262229 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-scripts\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.262311 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-config-data\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.262460 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjt5k\" (UniqueName: \"kubernetes.io/projected/e68830c0-793a-4984-9a65-427420074a97-kube-api-access-cjt5k\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.262588 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e68830c0-793a-4984-9a65-427420074a97-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.262640 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e68830c0-793a-4984-9a65-427420074a97-logs\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.364269 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e68830c0-793a-4984-9a65-427420074a97-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.364364 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e68830c0-793a-4984-9a65-427420074a97-logs\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.364402 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.364465 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.364556 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.364613 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-scripts\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.364643 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-config-data\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.364719 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjt5k\" (UniqueName: \"kubernetes.io/projected/e68830c0-793a-4984-9a65-427420074a97-kube-api-access-cjt5k\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.364785 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e68830c0-793a-4984-9a65-427420074a97-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.365386 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.366284 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e68830c0-793a-4984-9a65-427420074a97-logs\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.370665 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-config-data\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.375730 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-scripts\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.376285 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.384794 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjt5k\" (UniqueName: \"kubernetes.io/projected/e68830c0-793a-4984-9a65-427420074a97-kube-api-access-cjt5k\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.385448 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e68830c0-793a-4984-9a65-427420074a97-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.390413 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"e68830c0-793a-4984-9a65-427420074a97\") " pod="openstack/glance-default-external-api-0" Jan 22 10:57:44 crc kubenswrapper[4975]: I0122 10:57:44.445731 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.185755 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:57:45 crc kubenswrapper[4975]: W0122 10:57:45.188281 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68830c0_793a_4984_9a65_427420074a97.slice/crio-4a852c72dddaf3ed55433fed851ef8eccdc7055fcfb85350127c0a3cb4e2ced2 WatchSource:0}: Error finding container 4a852c72dddaf3ed55433fed851ef8eccdc7055fcfb85350127c0a3cb4e2ced2: Status 404 returned error can't find the container with id 4a852c72dddaf3ed55433fed851ef8eccdc7055fcfb85350127c0a3cb4e2ced2 Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.766464 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2656030-bade-452d-a4cb-ba4c12935750" path="/var/lib/kubelet/pods/c2656030-bade-452d-a4cb-ba4c12935750/volumes" Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.791498 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.894590 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-internal-tls-certs\") pod \"465645fc-a907-44d6-8d2b-30e29b40187b\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.894733 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-scripts\") pod \"465645fc-a907-44d6-8d2b-30e29b40187b\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.894781 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9fzv\" (UniqueName: \"kubernetes.io/projected/465645fc-a907-44d6-8d2b-30e29b40187b-kube-api-access-k9fzv\") pod \"465645fc-a907-44d6-8d2b-30e29b40187b\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.894804 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-logs\") pod \"465645fc-a907-44d6-8d2b-30e29b40187b\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.894854 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-config-data\") pod \"465645fc-a907-44d6-8d2b-30e29b40187b\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.894884 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-combined-ca-bundle\") pod \"465645fc-a907-44d6-8d2b-30e29b40187b\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.894932 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-httpd-run\") pod \"465645fc-a907-44d6-8d2b-30e29b40187b\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.894975 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"465645fc-a907-44d6-8d2b-30e29b40187b\" (UID: \"465645fc-a907-44d6-8d2b-30e29b40187b\") " Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.896269 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-logs" (OuterVolumeSpecName: "logs") pod "465645fc-a907-44d6-8d2b-30e29b40187b" (UID: "465645fc-a907-44d6-8d2b-30e29b40187b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.896970 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.910794 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "465645fc-a907-44d6-8d2b-30e29b40187b" (UID: "465645fc-a907-44d6-8d2b-30e29b40187b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.927524 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-scripts" (OuterVolumeSpecName: "scripts") pod "465645fc-a907-44d6-8d2b-30e29b40187b" (UID: "465645fc-a907-44d6-8d2b-30e29b40187b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.928494 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "465645fc-a907-44d6-8d2b-30e29b40187b" (UID: "465645fc-a907-44d6-8d2b-30e29b40187b"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:57:45 crc kubenswrapper[4975]: I0122 10:57:45.950527 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465645fc-a907-44d6-8d2b-30e29b40187b-kube-api-access-k9fzv" (OuterVolumeSpecName: "kube-api-access-k9fzv") pod "465645fc-a907-44d6-8d2b-30e29b40187b" (UID: "465645fc-a907-44d6-8d2b-30e29b40187b"). InnerVolumeSpecName "kube-api-access-k9fzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:45.998087 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9fzv\" (UniqueName: \"kubernetes.io/projected/465645fc-a907-44d6-8d2b-30e29b40187b-kube-api-access-k9fzv\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:45.998297 4975 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/465645fc-a907-44d6-8d2b-30e29b40187b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:45.998316 4975 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:45.998340 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.010483 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "465645fc-a907-44d6-8d2b-30e29b40187b" (UID: "465645fc-a907-44d6-8d2b-30e29b40187b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.027461 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-config-data" (OuterVolumeSpecName: "config-data") pod "465645fc-a907-44d6-8d2b-30e29b40187b" (UID: "465645fc-a907-44d6-8d2b-30e29b40187b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.046884 4975 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.051745 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "465645fc-a907-44d6-8d2b-30e29b40187b" (UID: "465645fc-a907-44d6-8d2b-30e29b40187b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.057703 4975 generic.go:334] "Generic (PLEG): container finished" podID="8e8fefa1-f558-4f60-80a2-053651036280" containerID="4216a7271788481db4cef95838b69d2e8b3ccff9b021c9c6a8f32b981a488540" exitCode=0 Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.057759 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n44m9" event={"ID":"8e8fefa1-f558-4f60-80a2-053651036280","Type":"ContainerDied","Data":"4216a7271788481db4cef95838b69d2e8b3ccff9b021c9c6a8f32b981a488540"} Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.060429 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e68830c0-793a-4984-9a65-427420074a97","Type":"ContainerStarted","Data":"4a852c72dddaf3ed55433fed851ef8eccdc7055fcfb85350127c0a3cb4e2ced2"} Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.062187 4975 generic.go:334] "Generic (PLEG): container finished" podID="5d9a9f89-8c5f-4284-82d5-083fb294f744" containerID="008495f8ac806fa58ef0aaf021d944d50e5b2b93dcc819ecd082de554f6dc95b" exitCode=0 Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.062226 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5e0b-account-create-update-gtwjv" event={"ID":"5d9a9f89-8c5f-4284-82d5-083fb294f744","Type":"ContainerDied","Data":"008495f8ac806fa58ef0aaf021d944d50e5b2b93dcc819ecd082de554f6dc95b"} Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.063182 4975 generic.go:334] "Generic (PLEG): container finished" podID="10ade78c-f910-4f71-98ef-6751e66bcd1e" containerID="70e48d0de8b77be468d6210a2d02f13c3bece4f0f0b97fd2f3e79a945e29ee27" exitCode=0 Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.063209 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" event={"ID":"10ade78c-f910-4f71-98ef-6751e66bcd1e","Type":"ContainerDied","Data":"70e48d0de8b77be468d6210a2d02f13c3bece4f0f0b97fd2f3e79a945e29ee27"} Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.075448 4975 generic.go:334] "Generic (PLEG): container finished" podID="8a6f5135-980c-42f0-91fb-fcc46f662331" containerID="d602bc37ffa546cf9882c92428843419799b1c297b338c1a8ad7ecc2632418f8" exitCode=0 Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.075536 4975 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-db-create-ct85p" event={"ID":"8a6f5135-980c-42f0-91fb-fcc46f662331","Type":"ContainerDied","Data":"d602bc37ffa546cf9882c92428843419799b1c297b338c1a8ad7ecc2632418f8"} Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.077574 4975 generic.go:334] "Generic (PLEG): container finished" podID="c67122ac-c96e-4dfb-9b46-59e75af7ecad" containerID="4b6b92a96a5f6936ddac14f154cfc5669dd985c8533925d7f3c1a2f283c0e190" exitCode=0 Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.077743 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z65jj" event={"ID":"c67122ac-c96e-4dfb-9b46-59e75af7ecad","Type":"ContainerDied","Data":"4b6b92a96a5f6936ddac14f154cfc5669dd985c8533925d7f3c1a2f283c0e190"} Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.083313 4975 generic.go:334] "Generic (PLEG): container finished" podID="465645fc-a907-44d6-8d2b-30e29b40187b" containerID="f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578" exitCode=0 Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.083391 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465645fc-a907-44d6-8d2b-30e29b40187b","Type":"ContainerDied","Data":"f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578"} Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.083417 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"465645fc-a907-44d6-8d2b-30e29b40187b","Type":"ContainerDied","Data":"4b8a29cca6111a762f9c729a5939124f82c8dc23ba7afa02d09cd6de7ea411d3"} Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.083435 4975 scope.go:117] "RemoveContainer" containerID="f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.083529 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.092300 4975 generic.go:334] "Generic (PLEG): container finished" podID="0eefbf47-b3c8-4491-b832-dd650404b5e4" containerID="895f18587f09c87d7fa49a23ca9a99af702529597aa194c11f68e1f788fb8c0f" exitCode=0 Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.092367 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-25c5-account-create-update-96jwm" event={"ID":"0eefbf47-b3c8-4491-b832-dd650404b5e4","Type":"ContainerDied","Data":"895f18587f09c87d7fa49a23ca9a99af702529597aa194c11f68e1f788fb8c0f"} Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.099510 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.099538 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.099549 4975 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.099578 4975 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465645fc-a907-44d6-8d2b-30e29b40187b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.113531 4975 scope.go:117] "RemoveContainer" containerID="b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.138728 4975 scope.go:117] "RemoveContainer" containerID="f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578" Jan 22 10:57:46 crc kubenswrapper[4975]: E0122 10:57:46.142101 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578\": container with ID starting with f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578 not found: ID does not exist" containerID="f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.142243 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578"} err="failed to get container status \"f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578\": rpc error: code = NotFound desc = could not find container \"f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578\": container with ID starting with f9a7a6482d0191a2c59e0a7857500e98141e88e8297f372a8622904c7b273578 not found: ID does not exist" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.142322 4975 scope.go:117] "RemoveContainer" containerID="b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03" Jan 22 10:57:46 crc kubenswrapper[4975]: E0122 10:57:46.143072 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03\": container with ID starting with b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03 not found: ID does not exist" containerID="b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.143106 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03"} err="failed to get container status \"b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03\": rpc error: code = NotFound desc = could not find container \"b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03\": container with ID starting with b7748e8fdadd05ef47c0c2f223b67815db68051d9b3ccb3713f58b40023c8b03 not found: ID does not exist" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.202496 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.219411 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.225075 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:57:46 crc kubenswrapper[4975]: E0122 10:57:46.225478 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465645fc-a907-44d6-8d2b-30e29b40187b" containerName="glance-log" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.225493 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="465645fc-a907-44d6-8d2b-30e29b40187b" containerName="glance-log" Jan 22 10:57:46 crc kubenswrapper[4975]: E0122 10:57:46.225508 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465645fc-a907-44d6-8d2b-30e29b40187b" containerName="glance-httpd" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.225515 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="465645fc-a907-44d6-8d2b-30e29b40187b" containerName="glance-httpd" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.225710 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="465645fc-a907-44d6-8d2b-30e29b40187b" containerName="glance-log" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.225725 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="465645fc-a907-44d6-8d2b-30e29b40187b" containerName="glance-httpd" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.226653 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.229295 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.229591 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.230280 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.407541 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/347d672f-7247-4f3d-89dd-f57fa3e0c628-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.407592 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.407645 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-scripts\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.407678 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-config-data\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.407735 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347d672f-7247-4f3d-89dd-f57fa3e0c628-logs\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.407750 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.407774 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.407788 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-56l2d\" (UniqueName: \"kubernetes.io/projected/347d672f-7247-4f3d-89dd-f57fa3e0c628-kube-api-access-56l2d\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.509397 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-config-data\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.509831 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347d672f-7247-4f3d-89dd-f57fa3e0c628-logs\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.509963 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.510097 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.510224 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56l2d\" (UniqueName: \"kubernetes.io/projected/347d672f-7247-4f3d-89dd-f57fa3e0c628-kube-api-access-56l2d\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.510380 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/347d672f-7247-4f3d-89dd-f57fa3e0c628-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.510476 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.510636 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-scripts\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.510702 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/347d672f-7247-4f3d-89dd-f57fa3e0c628-logs\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.510854 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.511376 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/347d672f-7247-4f3d-89dd-f57fa3e0c628-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.515545 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-config-data\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.516002 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-scripts\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.516188 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.531028 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347d672f-7247-4f3d-89dd-f57fa3e0c628-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.538091 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56l2d\" (UniqueName: \"kubernetes.io/projected/347d672f-7247-4f3d-89dd-f57fa3e0c628-kube-api-access-56l2d\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.545305 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"347d672f-7247-4f3d-89dd-f57fa3e0c628\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:57:46 crc kubenswrapper[4975]: I0122 10:57:46.601993 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.102660 4975 generic.go:334] "Generic (PLEG): container finished" podID="e622e272-9a72-488c-a99b-cc292d65b633" containerID="a56cc525975f1020bba3ddc1eb5e4efc57c03a15690d0c54a727679acb46b609" exitCode=0 Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.102985 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerDied","Data":"a56cc525975f1020bba3ddc1eb5e4efc57c03a15690d0c54a727679acb46b609"} Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.110288 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e68830c0-793a-4984-9a65-427420074a97","Type":"ContainerStarted","Data":"b6e2eaf320b51e9718e86479a9e9b622b7f149f1b8bf3d72efd6f7cc3df6447a"} Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.110352 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e68830c0-793a-4984-9a65-427420074a97","Type":"ContainerStarted","Data":"eed4ca0218e1de2df686a9981617c726442a71bbcab9b7f8fdd420d6d2d413e4"} Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.148791 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.14876688 podStartE2EDuration="3.14876688s" podCreationTimestamp="2026-01-22 10:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:57:47.130826081 +0000 UTC m=+1259.788862211" watchObservedRunningTime="2026-01-22 10:57:47.14876688 +0000 UTC m=+1259.806803020" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.192610 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.259869 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.328210 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-sg-core-conf-yaml\") pod \"e622e272-9a72-488c-a99b-cc292d65b633\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.328615 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fj9s\" (UniqueName: \"kubernetes.io/projected/e622e272-9a72-488c-a99b-cc292d65b633-kube-api-access-2fj9s\") pod \"e622e272-9a72-488c-a99b-cc292d65b633\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.328651 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-config-data\") pod \"e622e272-9a72-488c-a99b-cc292d65b633\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.328745 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-scripts\") pod \"e622e272-9a72-488c-a99b-cc292d65b633\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.328784 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-combined-ca-bundle\") pod \"e622e272-9a72-488c-a99b-cc292d65b633\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.328812 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-log-httpd\") pod \"e622e272-9a72-488c-a99b-cc292d65b633\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.328907 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-run-httpd\") pod \"e622e272-9a72-488c-a99b-cc292d65b633\" (UID: \"e622e272-9a72-488c-a99b-cc292d65b633\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.329664 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e622e272-9a72-488c-a99b-cc292d65b633" (UID: "e622e272-9a72-488c-a99b-cc292d65b633"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.332479 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e622e272-9a72-488c-a99b-cc292d65b633" (UID: "e622e272-9a72-488c-a99b-cc292d65b633"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.339622 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-scripts" (OuterVolumeSpecName: "scripts") pod "e622e272-9a72-488c-a99b-cc292d65b633" (UID: "e622e272-9a72-488c-a99b-cc292d65b633"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.339748 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e622e272-9a72-488c-a99b-cc292d65b633-kube-api-access-2fj9s" (OuterVolumeSpecName: "kube-api-access-2fj9s") pod "e622e272-9a72-488c-a99b-cc292d65b633" (UID: "e622e272-9a72-488c-a99b-cc292d65b633"). InnerVolumeSpecName "kube-api-access-2fj9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.430563 4975 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.430594 4975 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e622e272-9a72-488c-a99b-cc292d65b633-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.430604 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fj9s\" (UniqueName: \"kubernetes.io/projected/e622e272-9a72-488c-a99b-cc292d65b633-kube-api-access-2fj9s\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.430613 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.449927 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e622e272-9a72-488c-a99b-cc292d65b633" (UID: "e622e272-9a72-488c-a99b-cc292d65b633"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.467104 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e622e272-9a72-488c-a99b-cc292d65b633" (UID: "e622e272-9a72-488c-a99b-cc292d65b633"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.486041 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.531854 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.531884 4975 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.547172 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-config-data" (OuterVolumeSpecName: "config-data") pod "e622e272-9a72-488c-a99b-cc292d65b633" (UID: "e622e272-9a72-488c-a99b-cc292d65b633"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.634890 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg7dt\" (UniqueName: \"kubernetes.io/projected/5d9a9f89-8c5f-4284-82d5-083fb294f744-kube-api-access-hg7dt\") pod \"5d9a9f89-8c5f-4284-82d5-083fb294f744\" (UID: \"5d9a9f89-8c5f-4284-82d5-083fb294f744\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.635030 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9a9f89-8c5f-4284-82d5-083fb294f744-operator-scripts\") pod \"5d9a9f89-8c5f-4284-82d5-083fb294f744\" (UID: \"5d9a9f89-8c5f-4284-82d5-083fb294f744\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.635579 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e622e272-9a72-488c-a99b-cc292d65b633-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.636095 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d9a9f89-8c5f-4284-82d5-083fb294f744-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d9a9f89-8c5f-4284-82d5-083fb294f744" (UID: "5d9a9f89-8c5f-4284-82d5-083fb294f744"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.639935 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d9a9f89-8c5f-4284-82d5-083fb294f744-kube-api-access-hg7dt" (OuterVolumeSpecName: "kube-api-access-hg7dt") pod "5d9a9f89-8c5f-4284-82d5-083fb294f744" (UID: "5d9a9f89-8c5f-4284-82d5-083fb294f744"). InnerVolumeSpecName "kube-api-access-hg7dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.673485 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.698171 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.716656 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.733052 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.737484 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9a9f89-8c5f-4284-82d5-083fb294f744-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.737515 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg7dt\" (UniqueName: \"kubernetes.io/projected/5d9a9f89-8c5f-4284-82d5-083fb294f744-kube-api-access-hg7dt\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.747879 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.781405 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465645fc-a907-44d6-8d2b-30e29b40187b" path="/var/lib/kubelet/pods/465645fc-a907-44d6-8d2b-30e29b40187b/volumes" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841373 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8fefa1-f558-4f60-80a2-053651036280-operator-scripts\") pod \"8e8fefa1-f558-4f60-80a2-053651036280\" (UID: \"8e8fefa1-f558-4f60-80a2-053651036280\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841455 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-649rb\" (UniqueName: \"kubernetes.io/projected/10ade78c-f910-4f71-98ef-6751e66bcd1e-kube-api-access-649rb\") pod \"10ade78c-f910-4f71-98ef-6751e66bcd1e\" (UID: \"10ade78c-f910-4f71-98ef-6751e66bcd1e\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841513 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eefbf47-b3c8-4491-b832-dd650404b5e4-operator-scripts\") pod \"0eefbf47-b3c8-4491-b832-dd650404b5e4\" (UID: \"0eefbf47-b3c8-4491-b832-dd650404b5e4\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841613 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwzsl\" (UniqueName: \"kubernetes.io/projected/8e8fefa1-f558-4f60-80a2-053651036280-kube-api-access-fwzsl\") pod \"8e8fefa1-f558-4f60-80a2-053651036280\" (UID: \"8e8fefa1-f558-4f60-80a2-053651036280\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841694 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xtcg\" (UniqueName: \"kubernetes.io/projected/0eefbf47-b3c8-4491-b832-dd650404b5e4-kube-api-access-8xtcg\") pod \"0eefbf47-b3c8-4491-b832-dd650404b5e4\" (UID: \"0eefbf47-b3c8-4491-b832-dd650404b5e4\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841774 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67122ac-c96e-4dfb-9b46-59e75af7ecad-operator-scripts\") pod \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\" (UID: \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841840 4975 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ade78c-f910-4f71-98ef-6751e66bcd1e-operator-scripts\") pod \"10ade78c-f910-4f71-98ef-6751e66bcd1e\" (UID: \"10ade78c-f910-4f71-98ef-6751e66bcd1e\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841861 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h299b\" (UniqueName: \"kubernetes.io/projected/c67122ac-c96e-4dfb-9b46-59e75af7ecad-kube-api-access-h299b\") pod \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\" (UID: \"c67122ac-c96e-4dfb-9b46-59e75af7ecad\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841919 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8fefa1-f558-4f60-80a2-053651036280-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e8fefa1-f558-4f60-80a2-053651036280" (UID: "8e8fefa1-f558-4f60-80a2-053651036280"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.841953 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a6f5135-980c-42f0-91fb-fcc46f662331-operator-scripts\") pod \"8a6f5135-980c-42f0-91fb-fcc46f662331\" (UID: \"8a6f5135-980c-42f0-91fb-fcc46f662331\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.842005 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbdjf\" (UniqueName: \"kubernetes.io/projected/8a6f5135-980c-42f0-91fb-fcc46f662331-kube-api-access-bbdjf\") pod \"8a6f5135-980c-42f0-91fb-fcc46f662331\" (UID: \"8a6f5135-980c-42f0-91fb-fcc46f662331\") " Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.842581 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eefbf47-b3c8-4491-b832-dd650404b5e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eefbf47-b3c8-4491-b832-dd650404b5e4" (UID: "0eefbf47-b3c8-4491-b832-dd650404b5e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.842892 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8fefa1-f558-4f60-80a2-053651036280-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.842911 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eefbf47-b3c8-4491-b832-dd650404b5e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.843184 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10ade78c-f910-4f71-98ef-6751e66bcd1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10ade78c-f910-4f71-98ef-6751e66bcd1e" (UID: "10ade78c-f910-4f71-98ef-6751e66bcd1e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.843275 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c67122ac-c96e-4dfb-9b46-59e75af7ecad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c67122ac-c96e-4dfb-9b46-59e75af7ecad" (UID: "c67122ac-c96e-4dfb-9b46-59e75af7ecad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.843478 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a6f5135-980c-42f0-91fb-fcc46f662331-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a6f5135-980c-42f0-91fb-fcc46f662331" (UID: "8a6f5135-980c-42f0-91fb-fcc46f662331"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.845382 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eefbf47-b3c8-4491-b832-dd650404b5e4-kube-api-access-8xtcg" (OuterVolumeSpecName: "kube-api-access-8xtcg") pod "0eefbf47-b3c8-4491-b832-dd650404b5e4" (UID: "0eefbf47-b3c8-4491-b832-dd650404b5e4"). InnerVolumeSpecName "kube-api-access-8xtcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.849444 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c67122ac-c96e-4dfb-9b46-59e75af7ecad-kube-api-access-h299b" (OuterVolumeSpecName: "kube-api-access-h299b") pod "c67122ac-c96e-4dfb-9b46-59e75af7ecad" (UID: "c67122ac-c96e-4dfb-9b46-59e75af7ecad"). InnerVolumeSpecName "kube-api-access-h299b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.849529 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ade78c-f910-4f71-98ef-6751e66bcd1e-kube-api-access-649rb" (OuterVolumeSpecName: "kube-api-access-649rb") pod "10ade78c-f910-4f71-98ef-6751e66bcd1e" (UID: "10ade78c-f910-4f71-98ef-6751e66bcd1e"). InnerVolumeSpecName "kube-api-access-649rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.852629 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8fefa1-f558-4f60-80a2-053651036280-kube-api-access-fwzsl" (OuterVolumeSpecName: "kube-api-access-fwzsl") pod "8e8fefa1-f558-4f60-80a2-053651036280" (UID: "8e8fefa1-f558-4f60-80a2-053651036280"). InnerVolumeSpecName "kube-api-access-fwzsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.856734 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a6f5135-980c-42f0-91fb-fcc46f662331-kube-api-access-bbdjf" (OuterVolumeSpecName: "kube-api-access-bbdjf") pod "8a6f5135-980c-42f0-91fb-fcc46f662331" (UID: "8a6f5135-980c-42f0-91fb-fcc46f662331"). InnerVolumeSpecName "kube-api-access-bbdjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.944638 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xtcg\" (UniqueName: \"kubernetes.io/projected/0eefbf47-b3c8-4491-b832-dd650404b5e4-kube-api-access-8xtcg\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.944684 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67122ac-c96e-4dfb-9b46-59e75af7ecad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.944703 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ade78c-f910-4f71-98ef-6751e66bcd1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.944720 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h299b\" (UniqueName: \"kubernetes.io/projected/c67122ac-c96e-4dfb-9b46-59e75af7ecad-kube-api-access-h299b\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.944738 4975 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a6f5135-980c-42f0-91fb-fcc46f662331-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.944782 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbdjf\" (UniqueName: \"kubernetes.io/projected/8a6f5135-980c-42f0-91fb-fcc46f662331-kube-api-access-bbdjf\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.944795 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-649rb\" (UniqueName: \"kubernetes.io/projected/10ade78c-f910-4f71-98ef-6751e66bcd1e-kube-api-access-649rb\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:47 crc kubenswrapper[4975]: I0122 10:57:47.944808 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwzsl\" (UniqueName: \"kubernetes.io/projected/8e8fefa1-f558-4f60-80a2-053651036280-kube-api-access-fwzsl\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.118613 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" event={"ID":"10ade78c-f910-4f71-98ef-6751e66bcd1e","Type":"ContainerDied","Data":"82e79ddd5d9ae5fe678583490198358f48dc0d45addce8aed936c3a89110910c"} Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.118662 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82e79ddd5d9ae5fe678583490198358f48dc0d45addce8aed936c3a89110910c" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.119056 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3e3d-account-create-update-gnkwp" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.120552 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ct85p" event={"ID":"8a6f5135-980c-42f0-91fb-fcc46f662331","Type":"ContainerDied","Data":"88319f0ade3dd2061c830c7791a73dd560c19ac8dd4b5c7303e8b3aeefffc8a8"} Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.120597 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88319f0ade3dd2061c830c7791a73dd560c19ac8dd4b5c7303e8b3aeefffc8a8" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.120575 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ct85p" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.122497 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z65jj" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.122591 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z65jj" event={"ID":"c67122ac-c96e-4dfb-9b46-59e75af7ecad","Type":"ContainerDied","Data":"c7263093011245b466314331b697c4c46d6507a852e4285db89c6d83cd97d981"} Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.122630 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7263093011245b466314331b697c4c46d6507a852e4285db89c6d83cd97d981" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.124117 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-25c5-account-create-update-96jwm" event={"ID":"0eefbf47-b3c8-4491-b832-dd650404b5e4","Type":"ContainerDied","Data":"117a7d6732af42c07c047ce0ef16f6ffec8c3f05e178b3e15323a7842e9e5705"} Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.124197 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117a7d6732af42c07c047ce0ef16f6ffec8c3f05e178b3e15323a7842e9e5705" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.124141 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-25c5-account-create-update-96jwm" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.125547 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n44m9" event={"ID":"8e8fefa1-f558-4f60-80a2-053651036280","Type":"ContainerDied","Data":"3c0477be44b5dc2845dff5cfdc32e9486d4b45d321e286bb60e2e1c48bf83e40"} Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.125572 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-n44m9" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.125585 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c0477be44b5dc2845dff5cfdc32e9486d4b45d321e286bb60e2e1c48bf83e40" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.128365 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e622e272-9a72-488c-a99b-cc292d65b633","Type":"ContainerDied","Data":"87dfea9691e5c2c24800c25dcd31d5d55d30d389d87f204c207d4e9b2bd5c5d8"} Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.128395 4975 scope.go:117] "RemoveContainer" containerID="e1fd0446a6ed34c3f57892f12712fc0c51c5f68d5dd83a4251b5b15637467b43" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.128508 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.131883 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"347d672f-7247-4f3d-89dd-f57fa3e0c628","Type":"ContainerStarted","Data":"cd6420e7c6e0f381c24a8691209e408869a46399b9286d76b04fbf4b0c92c918"} Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.136487 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5e0b-account-create-update-gtwjv" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.137008 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5e0b-account-create-update-gtwjv" event={"ID":"5d9a9f89-8c5f-4284-82d5-083fb294f744","Type":"ContainerDied","Data":"3785f95fb75c255268faa1007830ef2c21723f78aac042ca9708994bd73aa37d"} Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.137036 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3785f95fb75c255268faa1007830ef2c21723f78aac042ca9708994bd73aa37d" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.157011 4975 scope.go:117] "RemoveContainer" containerID="98732f990566e868b063928455c3afb57aee86317836b5522d178dd70e91c889" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.174082 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.203967 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.205597 4975 scope.go:117] "RemoveContainer" containerID="ddedd5eb868748c98f7dfaefa9e1aa5981616bd6bfbc24ca64335968eaefd43f" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218177 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218643 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="ceilometer-notification-agent" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218665 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="ceilometer-notification-agent" Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218681 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="ceilometer-central-agent" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218689 4975 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="ceilometer-central-agent" Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218699 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eefbf47-b3c8-4491-b832-dd650404b5e4" containerName="mariadb-account-create-update" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218707 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eefbf47-b3c8-4491-b832-dd650404b5e4" containerName="mariadb-account-create-update" Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218723 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="sg-core" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218730 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="sg-core" Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218747 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8fefa1-f558-4f60-80a2-053651036280" containerName="mariadb-database-create" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218754 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8fefa1-f558-4f60-80a2-053651036280" containerName="mariadb-database-create" Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218773 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="proxy-httpd" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218779 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="proxy-httpd" Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218792 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a6f5135-980c-42f0-91fb-fcc46f662331" containerName="mariadb-database-create" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218800 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a6f5135-980c-42f0-91fb-fcc46f662331" containerName="mariadb-database-create" Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218813 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ade78c-f910-4f71-98ef-6751e66bcd1e" containerName="mariadb-account-create-update" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218823 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ade78c-f910-4f71-98ef-6751e66bcd1e" containerName="mariadb-account-create-update" Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218837 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c67122ac-c96e-4dfb-9b46-59e75af7ecad" containerName="mariadb-database-create" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218843 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67122ac-c96e-4dfb-9b46-59e75af7ecad" containerName="mariadb-database-create" Jan 22 10:57:48 crc kubenswrapper[4975]: E0122 10:57:48.218852 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d9a9f89-8c5f-4284-82d5-083fb294f744" containerName="mariadb-account-create-update" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.218858 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d9a9f89-8c5f-4284-82d5-083fb294f744" containerName="mariadb-account-create-update" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219033 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8fefa1-f558-4f60-80a2-053651036280" 
containerName="mariadb-database-create" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219051 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="ceilometer-central-agent" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219065 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eefbf47-b3c8-4491-b832-dd650404b5e4" containerName="mariadb-account-create-update" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219079 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ade78c-f910-4f71-98ef-6751e66bcd1e" containerName="mariadb-account-create-update" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219091 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="proxy-httpd" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219100 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a6f5135-980c-42f0-91fb-fcc46f662331" containerName="mariadb-database-create" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219109 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="sg-core" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219119 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d9a9f89-8c5f-4284-82d5-083fb294f744" containerName="mariadb-account-create-update" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219134 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e622e272-9a72-488c-a99b-cc292d65b633" containerName="ceilometer-notification-agent" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.219142 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c67122ac-c96e-4dfb-9b46-59e75af7ecad" containerName="mariadb-database-create" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.220647 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.230137 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.232734 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.233446 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.258002 4975 scope.go:117] "RemoveContainer" containerID="a56cc525975f1020bba3ddc1eb5e4efc57c03a15690d0c54a727679acb46b609" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.351593 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-scripts\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.352133 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.352490 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-config-data\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.352600 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-log-httpd\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.352630 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvhzl\" (UniqueName: \"kubernetes.io/projected/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-kube-api-access-tvhzl\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.352736 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.352808 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-run-httpd\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.454801 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.454882 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-config-data\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.454916 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-log-httpd\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.454935 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvhzl\" (UniqueName: \"kubernetes.io/projected/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-kube-api-access-tvhzl\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.454955 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.454970 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-run-httpd\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.454995 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-scripts\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.455631 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-log-httpd\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.455727 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-run-httpd\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.460792 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-scripts\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.461047 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-config-data\") pod 
\"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.461736 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.461972 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.470261 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvhzl\" (UniqueName: \"kubernetes.io/projected/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-kube-api-access-tvhzl\") pod \"ceilometer-0\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " pod="openstack/ceilometer-0" Jan 22 10:57:48 crc kubenswrapper[4975]: I0122 10:57:48.618052 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:57:49 crc kubenswrapper[4975]: I0122 10:57:49.135616 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:49 crc kubenswrapper[4975]: I0122 10:57:49.146446 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"347d672f-7247-4f3d-89dd-f57fa3e0c628","Type":"ContainerStarted","Data":"a2aee522bd3ac4a755026c7adee82c3317ebd87a11a6a02d592c4330a20c3e1a"} Jan 22 10:57:49 crc kubenswrapper[4975]: W0122 10:57:49.149809 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode44fdcc8_192c_4ae1_ac5d_e5b4117ef9dd.slice/crio-2f4d7bd9e2f6cacfa81f2fca6ad9bbdde088f71da5a616cbe92538c08f039295 WatchSource:0}: Error finding container 2f4d7bd9e2f6cacfa81f2fca6ad9bbdde088f71da5a616cbe92538c08f039295: Status 404 returned error can't find the container with id 2f4d7bd9e2f6cacfa81f2fca6ad9bbdde088f71da5a616cbe92538c08f039295 Jan 22 10:57:49 crc kubenswrapper[4975]: I0122 10:57:49.773319 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e622e272-9a72-488c-a99b-cc292d65b633" path="/var/lib/kubelet/pods/e622e272-9a72-488c-a99b-cc292d65b633/volumes" Jan 22 10:57:50 crc kubenswrapper[4975]: I0122 10:57:50.158671 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerStarted","Data":"2f4d7bd9e2f6cacfa81f2fca6ad9bbdde088f71da5a616cbe92538c08f039295"} Jan 22 10:57:51 crc kubenswrapper[4975]: I0122 10:57:51.167723 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"347d672f-7247-4f3d-89dd-f57fa3e0c628","Type":"ContainerStarted","Data":"9e2ccd2b50975c6a50a7a10e08bd01011c09ccbf923bdeb56bd3816573362cb7"} Jan 22 10:57:51 crc kubenswrapper[4975]: I0122 10:57:51.169038 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerStarted","Data":"a6a3ada5e7fc835ca309b5f58e961d281216035d73af7e6942fc0d50fa8714cd"} Jan 22 10:57:51 crc kubenswrapper[4975]: I0122 
10:57:51.186652 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.186633629 podStartE2EDuration="5.186633629s" podCreationTimestamp="2026-01-22 10:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:57:51.184611893 +0000 UTC m=+1263.842648003" watchObservedRunningTime="2026-01-22 10:57:51.186633629 +0000 UTC m=+1263.844669739" Jan 22 10:57:51 crc kubenswrapper[4975]: I0122 10:57:51.766458 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.182115 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerStarted","Data":"dc7f7ff6b7fb9498928e7f0f6d28ef89e5844d55c20655ae292eba8598dc939e"} Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.760553 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nqlnc"] Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.762257 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.764535 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.764724 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.764753 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dq6lx" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.770563 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nqlnc"] Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.834662 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vpmr\" (UniqueName: \"kubernetes.io/projected/422220a0-7220-4f6a-8d8b-46fa92e6082a-kube-api-access-4vpmr\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.834707 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-scripts\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.834728 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.834781 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-config-data\") pod 
\"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.936132 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-config-data\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.936276 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vpmr\" (UniqueName: \"kubernetes.io/projected/422220a0-7220-4f6a-8d8b-46fa92e6082a-kube-api-access-4vpmr\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.936304 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-scripts\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.936322 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.941181 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-config-data\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.941680 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-scripts\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.958880 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vpmr\" (UniqueName: \"kubernetes.io/projected/422220a0-7220-4f6a-8d8b-46fa92e6082a-kube-api-access-4vpmr\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:52 crc kubenswrapper[4975]: I0122 10:57:52.959017 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nqlnc\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:53 crc kubenswrapper[4975]: I0122 10:57:53.079097 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:57:53 crc kubenswrapper[4975]: I0122 10:57:53.204490 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerStarted","Data":"24b355755cba9a7a07c14bfbe4c63ddc3df43af2cdc90e815e13520239afab77"} Jan 22 10:57:53 crc kubenswrapper[4975]: I0122 10:57:53.537022 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nqlnc"] Jan 22 10:57:54 crc kubenswrapper[4975]: I0122 10:57:54.219412 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nqlnc" event={"ID":"422220a0-7220-4f6a-8d8b-46fa92e6082a","Type":"ContainerStarted","Data":"d0fa3000bdb08fce88660f9e81e98c37307a438c396b8dd6bd14097e0f9ca4bd"} Jan 22 10:57:54 crc kubenswrapper[4975]: I0122 10:57:54.446606 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 10:57:54 crc kubenswrapper[4975]: I0122 10:57:54.446645 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 10:57:54 crc kubenswrapper[4975]: I0122 10:57:54.478449 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 10:57:54 crc kubenswrapper[4975]: I0122 10:57:54.517186 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 10:57:55 crc kubenswrapper[4975]: I0122 10:57:55.232012 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="ceilometer-central-agent" containerID="cri-o://a6a3ada5e7fc835ca309b5f58e961d281216035d73af7e6942fc0d50fa8714cd" gracePeriod=30 Jan 22 10:57:55 crc kubenswrapper[4975]: I0122 10:57:55.232110 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerStarted","Data":"a9884073256d7819e77899a8574d0c7c90abd40f4fa986f97a8d6d631fdb3652"} Jan 22 10:57:55 crc kubenswrapper[4975]: I0122 10:57:55.232155 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:57:55 crc kubenswrapper[4975]: I0122 10:57:55.232440 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="sg-core" containerID="cri-o://24b355755cba9a7a07c14bfbe4c63ddc3df43af2cdc90e815e13520239afab77" gracePeriod=30 Jan 22 10:57:55 crc kubenswrapper[4975]: I0122 10:57:55.232482 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="ceilometer-notification-agent" containerID="cri-o://dc7f7ff6b7fb9498928e7f0f6d28ef89e5844d55c20655ae292eba8598dc939e" gracePeriod=30 Jan 22 10:57:55 crc kubenswrapper[4975]: I0122 10:57:55.232491 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:57:55 crc kubenswrapper[4975]: I0122 10:57:55.232489 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="proxy-httpd" 
containerID="cri-o://a9884073256d7819e77899a8574d0c7c90abd40f4fa986f97a8d6d631fdb3652" gracePeriod=30 Jan 22 10:57:55 crc kubenswrapper[4975]: I0122 10:57:55.232520 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:57:55 crc kubenswrapper[4975]: I0122 10:57:55.254220 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.410997594 podStartE2EDuration="7.254200984s" podCreationTimestamp="2026-01-22 10:57:48 +0000 UTC" firstStartedPulling="2026-01-22 10:57:49.152467821 +0000 UTC m=+1261.810503931" lastFinishedPulling="2026-01-22 10:57:53.995671211 +0000 UTC m=+1266.653707321" observedRunningTime="2026-01-22 10:57:55.249757951 +0000 UTC m=+1267.907794061" watchObservedRunningTime="2026-01-22 10:57:55.254200984 +0000 UTC m=+1267.912237094" Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.242325 4975 generic.go:334] "Generic (PLEG): container finished" podID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerID="a9884073256d7819e77899a8574d0c7c90abd40f4fa986f97a8d6d631fdb3652" exitCode=0 Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.242572 4975 generic.go:334] "Generic (PLEG): container finished" podID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerID="24b355755cba9a7a07c14bfbe4c63ddc3df43af2cdc90e815e13520239afab77" exitCode=2 Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.242358 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerDied","Data":"a9884073256d7819e77899a8574d0c7c90abd40f4fa986f97a8d6d631fdb3652"} Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.242617 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerDied","Data":"24b355755cba9a7a07c14bfbe4c63ddc3df43af2cdc90e815e13520239afab77"} Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.242631 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerDied","Data":"dc7f7ff6b7fb9498928e7f0f6d28ef89e5844d55c20655ae292eba8598dc939e"} Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.242580 4975 generic.go:334] "Generic (PLEG): container finished" podID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerID="dc7f7ff6b7fb9498928e7f0f6d28ef89e5844d55c20655ae292eba8598dc939e" exitCode=0 Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.602897 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.603287 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.657860 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:56 crc kubenswrapper[4975]: I0122 10:57:56.658348 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:57 crc kubenswrapper[4975]: I0122 10:57:57.035019 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 10:57:57 crc kubenswrapper[4975]: I0122 10:57:57.076106 4975 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 10:57:57 crc kubenswrapper[4975]: I0122 10:57:57.251399 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:57 crc kubenswrapper[4975]: I0122 10:57:57.251475 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:59 crc kubenswrapper[4975]: I0122 10:57:59.566812 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 10:57:59 crc kubenswrapper[4975]: I0122 10:57:59.567280 4975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:57:59 crc kubenswrapper[4975]: I0122 10:57:59.577469 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 10:58:00 crc kubenswrapper[4975]: I0122 10:58:00.301753 4975 generic.go:334] "Generic (PLEG): container finished" podID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerID="a6a3ada5e7fc835ca309b5f58e961d281216035d73af7e6942fc0d50fa8714cd" exitCode=0 Jan 22 10:58:00 crc kubenswrapper[4975]: I0122 10:58:00.301791 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerDied","Data":"a6a3ada5e7fc835ca309b5f58e961d281216035d73af7e6942fc0d50fa8714cd"} Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.495413 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.621880 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-run-httpd\") pod \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.622049 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-log-httpd\") pod \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.622123 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-sg-core-conf-yaml\") pod \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.622211 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-combined-ca-bundle\") pod \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.622298 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-config-data\") pod \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.622412 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-scripts\") pod \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.622467 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvhzl\" (UniqueName: \"kubernetes.io/projected/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-kube-api-access-tvhzl\") pod \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\" (UID: \"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd\") " Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.622416 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" (UID: "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.622488 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" (UID: "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.623052 4975 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.623084 4975 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.628820 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-kube-api-access-tvhzl" (OuterVolumeSpecName: "kube-api-access-tvhzl") pod "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" (UID: "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd"). InnerVolumeSpecName "kube-api-access-tvhzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.628833 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-scripts" (OuterVolumeSpecName: "scripts") pod "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" (UID: "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.655815 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" (UID: "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.719224 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" (UID: "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.724354 4975 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.724388 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.724400 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.724409 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvhzl\" (UniqueName: \"kubernetes.io/projected/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-kube-api-access-tvhzl\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.752938 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-config-data" (OuterVolumeSpecName: "config-data") pod "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" (UID: "e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:02 crc kubenswrapper[4975]: I0122 10:58:02.826095 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.349570 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nqlnc" event={"ID":"422220a0-7220-4f6a-8d8b-46fa92e6082a","Type":"ContainerStarted","Data":"8ee2ac45009c61ee6247b9229c8e5887129712e0d41c80ac13b6f679b29b2925"} Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.355021 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd","Type":"ContainerDied","Data":"2f4d7bd9e2f6cacfa81f2fca6ad9bbdde088f71da5a616cbe92538c08f039295"} Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.355318 4975 scope.go:117] "RemoveContainer" containerID="a9884073256d7819e77899a8574d0c7c90abd40f4fa986f97a8d6d631fdb3652" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.355050 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.388116 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-nqlnc" podStartSLOduration=2.729178014 podStartE2EDuration="11.38808035s" podCreationTimestamp="2026-01-22 10:57:52 +0000 UTC" firstStartedPulling="2026-01-22 10:57:53.545640874 +0000 UTC m=+1266.203676984" lastFinishedPulling="2026-01-22 10:58:02.20454319 +0000 UTC m=+1274.862579320" observedRunningTime="2026-01-22 10:58:03.37838285 +0000 UTC m=+1276.036419040" watchObservedRunningTime="2026-01-22 10:58:03.38808035 +0000 UTC m=+1276.046116510" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.390515 4975 scope.go:117] "RemoveContainer" containerID="24b355755cba9a7a07c14bfbe4c63ddc3df43af2cdc90e815e13520239afab77" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.423886 4975 scope.go:117] "RemoveContainer" containerID="dc7f7ff6b7fb9498928e7f0f6d28ef89e5844d55c20655ae292eba8598dc939e" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.431457 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.442929 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.460242 4975 scope.go:117] "RemoveContainer" containerID="a6a3ada5e7fc835ca309b5f58e961d281216035d73af7e6942fc0d50fa8714cd" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.523747 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:58:03 crc kubenswrapper[4975]: E0122 10:58:03.524172 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="ceilometer-notification-agent" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.524187 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="ceilometer-notification-agent" Jan 22 10:58:03 crc kubenswrapper[4975]: E0122 10:58:03.524237 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="ceilometer-central-agent" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.524248 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="ceilometer-central-agent" Jan 22 10:58:03 crc kubenswrapper[4975]: E0122 10:58:03.524264 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="sg-core" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.524274 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="sg-core" Jan 22 10:58:03 crc kubenswrapper[4975]: E0122 10:58:03.524287 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="proxy-httpd" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.524296 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="proxy-httpd" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.524516 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="ceilometer-central-agent" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.524537 4975 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="sg-core" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.524547 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="ceilometer-notification-agent" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.524566 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" containerName="proxy-httpd" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.537057 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.541658 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.541948 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.548193 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.650160 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-run-httpd\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.650215 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-log-httpd\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.650291 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-scripts\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.650349 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.650385 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-config-data\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.650414 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9j5\" (UniqueName: \"kubernetes.io/projected/a82f5ab8-87b3-4b27-82c6-34739a5afb69-kube-api-access-7l9j5\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.650570 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.752486 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-config-data\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.752575 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l9j5\" (UniqueName: \"kubernetes.io/projected/a82f5ab8-87b3-4b27-82c6-34739a5afb69-kube-api-access-7l9j5\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.752639 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.752733 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-run-httpd\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.752785 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-log-httpd\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.752942 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-scripts\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.753016 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.753504 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-run-httpd\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.753905 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-log-httpd\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.756971 4975 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.758049 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.758784 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-config-data\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.759510 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-scripts\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.768699 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd" path="/var/lib/kubelet/pods/e44fdcc8-192c-4ae1-ac5d-e5b4117ef9dd/volumes" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.772848 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l9j5\" (UniqueName: \"kubernetes.io/projected/a82f5ab8-87b3-4b27-82c6-34739a5afb69-kube-api-access-7l9j5\") pod \"ceilometer-0\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") " pod="openstack/ceilometer-0" Jan 22 10:58:03 crc kubenswrapper[4975]: I0122 10:58:03.957003 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:58:04 crc kubenswrapper[4975]: W0122 10:58:04.482456 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda82f5ab8_87b3_4b27_82c6_34739a5afb69.slice/crio-3d168047e7c5b8a597a12e517b4208e4f7e2e5ec10c41836609056601e1b03ae WatchSource:0}: Error finding container 3d168047e7c5b8a597a12e517b4208e4f7e2e5ec10c41836609056601e1b03ae: Status 404 returned error can't find the container with id 3d168047e7c5b8a597a12e517b4208e4f7e2e5ec10c41836609056601e1b03ae Jan 22 10:58:04 crc kubenswrapper[4975]: I0122 10:58:04.485626 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:58:05 crc kubenswrapper[4975]: I0122 10:58:05.371175 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerStarted","Data":"e15de372b37d2164dac28c94d2dd57b55b08e65f5446998722071cc9de410691"} Jan 22 10:58:05 crc kubenswrapper[4975]: I0122 10:58:05.371601 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerStarted","Data":"3d168047e7c5b8a597a12e517b4208e4f7e2e5ec10c41836609056601e1b03ae"} Jan 22 10:58:06 crc kubenswrapper[4975]: I0122 10:58:06.388717 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerStarted","Data":"471fdfa54164a659bb95500a1d670c62ecfa145bd4d406e539b8462740b4f934"} Jan 22 10:58:09 crc kubenswrapper[4975]: I0122 10:58:09.431418 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerStarted","Data":"730e273a1b3cdf04f5b63bd7622bebbc9329b128126263aa48522f6d1912ebeb"} Jan 22 10:58:11 crc kubenswrapper[4975]: I0122 10:58:11.454489 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerStarted","Data":"a7ec4274bbf787cef63addd879fe8aea57f0619daf3d56ff935b80eb70e24079"} Jan 22 10:58:11 crc kubenswrapper[4975]: I0122 10:58:11.454788 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:58:11 crc kubenswrapper[4975]: I0122 10:58:11.476033 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6514711970000002 podStartE2EDuration="8.476012277s" podCreationTimestamp="2026-01-22 10:58:03 +0000 UTC" firstStartedPulling="2026-01-22 10:58:04.486513294 +0000 UTC m=+1277.144549434" lastFinishedPulling="2026-01-22 10:58:10.311054364 +0000 UTC m=+1282.969090514" observedRunningTime="2026-01-22 10:58:11.47286567 +0000 UTC m=+1284.130901800" watchObservedRunningTime="2026-01-22 10:58:11.476012277 +0000 UTC m=+1284.134048387" Jan 22 10:58:18 crc kubenswrapper[4975]: I0122 10:58:18.527559 4975 generic.go:334] "Generic (PLEG): container finished" podID="422220a0-7220-4f6a-8d8b-46fa92e6082a" containerID="8ee2ac45009c61ee6247b9229c8e5887129712e0d41c80ac13b6f679b29b2925" exitCode=0 Jan 22 10:58:18 crc kubenswrapper[4975]: I0122 10:58:18.528831 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nqlnc" 
event={"ID":"422220a0-7220-4f6a-8d8b-46fa92e6082a","Type":"ContainerDied","Data":"8ee2ac45009c61ee6247b9229c8e5887129712e0d41c80ac13b6f679b29b2925"} Jan 22 10:58:19 crc kubenswrapper[4975]: I0122 10:58:19.952437 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.098097 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-scripts\") pod \"422220a0-7220-4f6a-8d8b-46fa92e6082a\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.098263 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-combined-ca-bundle\") pod \"422220a0-7220-4f6a-8d8b-46fa92e6082a\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.098359 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-config-data\") pod \"422220a0-7220-4f6a-8d8b-46fa92e6082a\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.098390 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vpmr\" (UniqueName: \"kubernetes.io/projected/422220a0-7220-4f6a-8d8b-46fa92e6082a-kube-api-access-4vpmr\") pod \"422220a0-7220-4f6a-8d8b-46fa92e6082a\" (UID: \"422220a0-7220-4f6a-8d8b-46fa92e6082a\") " Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.104504 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-scripts" (OuterVolumeSpecName: "scripts") pod "422220a0-7220-4f6a-8d8b-46fa92e6082a" (UID: "422220a0-7220-4f6a-8d8b-46fa92e6082a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.105646 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422220a0-7220-4f6a-8d8b-46fa92e6082a-kube-api-access-4vpmr" (OuterVolumeSpecName: "kube-api-access-4vpmr") pod "422220a0-7220-4f6a-8d8b-46fa92e6082a" (UID: "422220a0-7220-4f6a-8d8b-46fa92e6082a"). InnerVolumeSpecName "kube-api-access-4vpmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.143536 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-config-data" (OuterVolumeSpecName: "config-data") pod "422220a0-7220-4f6a-8d8b-46fa92e6082a" (UID: "422220a0-7220-4f6a-8d8b-46fa92e6082a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.149272 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "422220a0-7220-4f6a-8d8b-46fa92e6082a" (UID: "422220a0-7220-4f6a-8d8b-46fa92e6082a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.277061 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.277099 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vpmr\" (UniqueName: \"kubernetes.io/projected/422220a0-7220-4f6a-8d8b-46fa92e6082a-kube-api-access-4vpmr\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.277113 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.277123 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422220a0-7220-4f6a-8d8b-46fa92e6082a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.555556 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nqlnc" event={"ID":"422220a0-7220-4f6a-8d8b-46fa92e6082a","Type":"ContainerDied","Data":"d0fa3000bdb08fce88660f9e81e98c37307a438c396b8dd6bd14097e0f9ca4bd"} Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.555943 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0fa3000bdb08fce88660f9e81e98c37307a438c396b8dd6bd14097e0f9ca4bd" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.555617 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nqlnc" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.728769 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 10:58:20 crc kubenswrapper[4975]: E0122 10:58:20.729095 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422220a0-7220-4f6a-8d8b-46fa92e6082a" containerName="nova-cell0-conductor-db-sync" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.729106 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="422220a0-7220-4f6a-8d8b-46fa92e6082a" containerName="nova-cell0-conductor-db-sync" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.729252 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="422220a0-7220-4f6a-8d8b-46fa92e6082a" containerName="nova-cell0-conductor-db-sync" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.730319 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.734163 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dq6lx" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.740590 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.750072 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.892962 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e829dfc-69fb-4951-b203-53af6945a1e1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9e829dfc-69fb-4951-b203-53af6945a1e1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.893181 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e829dfc-69fb-4951-b203-53af6945a1e1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9e829dfc-69fb-4951-b203-53af6945a1e1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.893402 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphmh\" (UniqueName: \"kubernetes.io/projected/9e829dfc-69fb-4951-b203-53af6945a1e1-kube-api-access-jphmh\") pod \"nova-cell0-conductor-0\" (UID: \"9e829dfc-69fb-4951-b203-53af6945a1e1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.995430 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e829dfc-69fb-4951-b203-53af6945a1e1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9e829dfc-69fb-4951-b203-53af6945a1e1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.995677 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphmh\" (UniqueName: \"kubernetes.io/projected/9e829dfc-69fb-4951-b203-53af6945a1e1-kube-api-access-jphmh\") pod \"nova-cell0-conductor-0\" (UID: \"9e829dfc-69fb-4951-b203-53af6945a1e1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:20 crc kubenswrapper[4975]: I0122 10:58:20.995768 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e829dfc-69fb-4951-b203-53af6945a1e1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9e829dfc-69fb-4951-b203-53af6945a1e1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:21 crc kubenswrapper[4975]: I0122 10:58:21.001304 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e829dfc-69fb-4951-b203-53af6945a1e1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9e829dfc-69fb-4951-b203-53af6945a1e1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:21 crc kubenswrapper[4975]: I0122 10:58:21.011187 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e829dfc-69fb-4951-b203-53af6945a1e1-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"9e829dfc-69fb-4951-b203-53af6945a1e1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:21 crc kubenswrapper[4975]: I0122 10:58:21.025178 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jphmh\" (UniqueName: \"kubernetes.io/projected/9e829dfc-69fb-4951-b203-53af6945a1e1-kube-api-access-jphmh\") pod \"nova-cell0-conductor-0\" (UID: \"9e829dfc-69fb-4951-b203-53af6945a1e1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:21 crc kubenswrapper[4975]: I0122 10:58:21.051023 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:21 crc kubenswrapper[4975]: I0122 10:58:21.503901 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 10:58:21 crc kubenswrapper[4975]: W0122 10:58:21.508708 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e829dfc_69fb_4951_b203_53af6945a1e1.slice/crio-235443e2a88a2d256f1d107635c293b7221ff9ca568cbab50fdb5418e431ea2d WatchSource:0}: Error finding container 235443e2a88a2d256f1d107635c293b7221ff9ca568cbab50fdb5418e431ea2d: Status 404 returned error can't find the container with id 235443e2a88a2d256f1d107635c293b7221ff9ca568cbab50fdb5418e431ea2d Jan 22 10:58:21 crc kubenswrapper[4975]: I0122 10:58:21.568225 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9e829dfc-69fb-4951-b203-53af6945a1e1","Type":"ContainerStarted","Data":"235443e2a88a2d256f1d107635c293b7221ff9ca568cbab50fdb5418e431ea2d"} Jan 22 10:58:22 crc kubenswrapper[4975]: I0122 10:58:22.583670 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9e829dfc-69fb-4951-b203-53af6945a1e1","Type":"ContainerStarted","Data":"22e311ca6960c00cb8a1664dabb675e5112265724d101f923c4c76f5aa73b2d2"} Jan 22 10:58:22 crc kubenswrapper[4975]: I0122 10:58:22.584696 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:22 crc kubenswrapper[4975]: I0122 10:58:22.617472 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.61744193 podStartE2EDuration="2.61744193s" podCreationTimestamp="2026-01-22 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:58:22.610747574 +0000 UTC m=+1295.268783734" watchObservedRunningTime="2026-01-22 10:58:22.61744193 +0000 UTC m=+1295.275478080" Jan 22 10:58:27 crc kubenswrapper[4975]: I0122 10:58:27.852447 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.022056 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4zcnd"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.023994 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.026363 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.026463 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.040453 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zcnd"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.186166 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-config-data\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.186400 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.186458 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf6j5\" (UniqueName: \"kubernetes.io/projected/6ef0e4bb-4778-4690-abc6-21c903eb69e3-kube-api-access-tf6j5\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.186490 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-scripts\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.237146 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.239012 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.242222 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.268090 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.284055 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.285518 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.290965 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.291710 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.291754 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf6j5\" (UniqueName: \"kubernetes.io/projected/6ef0e4bb-4778-4690-abc6-21c903eb69e3-kube-api-access-tf6j5\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.291772 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-scripts\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.291805 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-config-data\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.298545 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.300888 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-scripts\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.300926 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.328711 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-config-data\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.355499 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf6j5\" (UniqueName: \"kubernetes.io/projected/6ef0e4bb-4778-4690-abc6-21c903eb69e3-kube-api-access-tf6j5\") pod \"nova-cell0-cell-mapping-4zcnd\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") " pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.388778 4975 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-757b4f8459-qztph"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.390233 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.396904 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.397281 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-config-data\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.397326 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-logs\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.397345 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.397383 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da2f868-dc94-4a58-9b68-7e4246ba3778-logs\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.397402 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.397424 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74sl9\" (UniqueName: \"kubernetes.io/projected/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-kube-api-access-74sl9\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.397453 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-config-data\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.397479 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wg2\" (UniqueName: \"kubernetes.io/projected/4da2f868-dc94-4a58-9b68-7e4246ba3778-kube-api-access-n9wg2\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.397984 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.406674 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.419009 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.473208 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-qztph"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499081 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wg2\" (UniqueName: \"kubernetes.io/projected/4da2f868-dc94-4a58-9b68-7e4246ba3778-kube-api-access-n9wg2\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499137 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499174 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9wpn\" (UniqueName: \"kubernetes.io/projected/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-kube-api-access-q9wpn\") pod \"nova-scheduler-0\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499222 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-svc\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499255 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499274 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-config-data\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499296 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74jk\" (UniqueName: \"kubernetes.io/projected/c4f4b9fe-d65b-4778-a38e-704314e4cf34-kube-api-access-d74jk\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499312 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-logs\") 
pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499327 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-config-data\") pod \"nova-scheduler-0\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499349 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499367 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-config\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499441 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499482 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da2f868-dc94-4a58-9b68-7e4246ba3778-logs\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499499 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499519 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74sl9\" (UniqueName: \"kubernetes.io/projected/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-kube-api-access-74sl9\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499549 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-config-data\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.499564 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.506910 4975 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-logs\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.507557 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da2f868-dc94-4a58-9b68-7e4246ba3778-logs\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.508485 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.510994 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.511575 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.514956 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.516872 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.517561 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-config-data\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.522755 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-config-data\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.524950 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74sl9\" (UniqueName: \"kubernetes.io/projected/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-kube-api-access-74sl9\") pod \"nova-api-0\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.528079 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wg2\" (UniqueName: \"kubernetes.io/projected/4da2f868-dc94-4a58-9b68-7e4246ba3778-kube-api-access-n9wg2\") pod \"nova-metadata-0\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.539413 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.559740 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.600899 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-svc\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.600966 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.600989 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74jk\" (UniqueName: \"kubernetes.io/projected/c4f4b9fe-d65b-4778-a38e-704314e4cf34-kube-api-access-d74jk\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.601006 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-config-data\") pod \"nova-scheduler-0\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.601032 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-config\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.601050 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.601071 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjwl\" (UniqueName: \"kubernetes.io/projected/6f967745-7b78-4a9d-be66-b94128b04cf2-kube-api-access-zgjwl\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.601099 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.601132 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc 
kubenswrapper[4975]: I0122 10:58:30.601174 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.601205 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.601224 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9wpn\" (UniqueName: \"kubernetes.io/projected/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-kube-api-access-q9wpn\") pod \"nova-scheduler-0\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.602373 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.602747 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-svc\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.603003 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-config\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.607970 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.608519 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.611792 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-config-data\") pod \"nova-scheduler-0\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.618859 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74jk\" (UniqueName: 
\"kubernetes.io/projected/c4f4b9fe-d65b-4778-a38e-704314e4cf34-kube-api-access-d74jk\") pod \"dnsmasq-dns-757b4f8459-qztph\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.623839 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.625598 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9wpn\" (UniqueName: \"kubernetes.io/projected/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-kube-api-access-q9wpn\") pod \"nova-scheduler-0\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.643936 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zcnd" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.705816 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.706036 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjwl\" (UniqueName: \"kubernetes.io/projected/6f967745-7b78-4a9d-be66-b94128b04cf2-kube-api-access-zgjwl\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.706107 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.713802 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.717099 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.730715 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.732838 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjwl\" (UniqueName: \"kubernetes.io/projected/6f967745-7b78-4a9d-be66-b94128b04cf2-kube-api-access-zgjwl\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.751383 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.776019 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:58:30 crc kubenswrapper[4975]: I0122 10:58:30.958094 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.119255 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.277343 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2m6rj"] Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.279040 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.285077 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.285405 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.301969 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zcnd"] Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.314840 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2m6rj"] Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.352419 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-qztph"] Jan 22 10:58:31 crc kubenswrapper[4975]: W0122 10:58:31.361389 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a7a50a0_2077_48c5_8a6d_e0aaa4c43741.slice/crio-c76f0d1736e31e94de3a51dc8080f5ff14524656c130493e8f139aa42b5d9303 WatchSource:0}: Error finding container c76f0d1736e31e94de3a51dc8080f5ff14524656c130493e8f139aa42b5d9303: Status 404 returned error can't find the container with id c76f0d1736e31e94de3a51dc8080f5ff14524656c130493e8f139aa42b5d9303 Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.366013 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.385869 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.430785 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-config-data\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " 
pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.431558 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvd2s\" (UniqueName: \"kubernetes.io/projected/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-kube-api-access-nvd2s\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.431634 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.432519 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-scripts\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.535715 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-config-data\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.535786 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvd2s\" (UniqueName: \"kubernetes.io/projected/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-kube-api-access-nvd2s\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.535833 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.535881 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-scripts\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.547150 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-config-data\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.549237 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-scripts\") pod 
\"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.562262 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvd2s\" (UniqueName: \"kubernetes.io/projected/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-kube-api-access-nvd2s\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.565017 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2m6rj\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.590222 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.699981 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zcnd" event={"ID":"6ef0e4bb-4778-4690-abc6-21c903eb69e3","Type":"ContainerStarted","Data":"928298a29e62cc725f9d6f84b054a0407f4c86450690746034b7781994797911"} Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.700024 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zcnd" event={"ID":"6ef0e4bb-4778-4690-abc6-21c903eb69e3","Type":"ContainerStarted","Data":"4529562c315c734b41757ebdb47d78f5962e07f456c84b054bb2fd61a2236758"} Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.702415 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9","Type":"ContainerStarted","Data":"3d9e4fc807412c898ae9807268e86722e46e7c4a181ef5eca8759774ad0358eb"} Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.703934 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-qztph" event={"ID":"c4f4b9fe-d65b-4778-a38e-704314e4cf34","Type":"ContainerStarted","Data":"a2dbeb93507eb93d8da759d1e3e592609b62231694680194da5239b3cce31cf9"} Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.704738 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741","Type":"ContainerStarted","Data":"c76f0d1736e31e94de3a51dc8080f5ff14524656c130493e8f139aa42b5d9303"} Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.705536 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6f967745-7b78-4a9d-be66-b94128b04cf2","Type":"ContainerStarted","Data":"c318a3bd21a866c0de49955e2dc0d4bcdaa75257f5d163b763b6967c8938651d"} Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.711209 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4da2f868-dc94-4a58-9b68-7e4246ba3778","Type":"ContainerStarted","Data":"17e5990378fd38157dbcb6353aedd7e5c5888e46dd86cec12a78c2af313451b6"} Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.726510 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4zcnd" podStartSLOduration=1.726490114 podStartE2EDuration="1.726490114s" podCreationTimestamp="2026-01-22 10:58:30 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:58:31.724661603 +0000 UTC m=+1304.382697713" watchObservedRunningTime="2026-01-22 10:58:31.726490114 +0000 UTC m=+1304.384526234" Jan 22 10:58:31 crc kubenswrapper[4975]: I0122 10:58:31.819266 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:32 crc kubenswrapper[4975]: I0122 10:58:32.289880 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2m6rj"] Jan 22 10:58:32 crc kubenswrapper[4975]: W0122 10:58:32.301079 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39939ecf_15f1_4cdc_a3d5_cf09de7c3b86.slice/crio-ff3e357f88625ef82fbd9782a2162008e3498f13968678217395af941e870d3d WatchSource:0}: Error finding container ff3e357f88625ef82fbd9782a2162008e3498f13968678217395af941e870d3d: Status 404 returned error can't find the container with id ff3e357f88625ef82fbd9782a2162008e3498f13968678217395af941e870d3d Jan 22 10:58:32 crc kubenswrapper[4975]: I0122 10:58:32.730301 4975 generic.go:334] "Generic (PLEG): container finished" podID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" containerID="49ce5550d00dd33ce67de4e89597335c9e2175203219c9e7cc4480c91c87af89" exitCode=0 Jan 22 10:58:32 crc kubenswrapper[4975]: I0122 10:58:32.730413 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-qztph" event={"ID":"c4f4b9fe-d65b-4778-a38e-704314e4cf34","Type":"ContainerDied","Data":"49ce5550d00dd33ce67de4e89597335c9e2175203219c9e7cc4480c91c87af89"} Jan 22 10:58:32 crc kubenswrapper[4975]: I0122 10:58:32.734581 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2m6rj" event={"ID":"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86","Type":"ContainerStarted","Data":"fa3abc8565a2f06bff11a63285a48dae0928de32844c130ccfb00840267b15d9"} Jan 22 10:58:32 crc kubenswrapper[4975]: I0122 10:58:32.734617 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2m6rj" event={"ID":"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86","Type":"ContainerStarted","Data":"ff3e357f88625ef82fbd9782a2162008e3498f13968678217395af941e870d3d"} Jan 22 10:58:32 crc kubenswrapper[4975]: I0122 10:58:32.778645 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-2m6rj" podStartSLOduration=1.778624752 podStartE2EDuration="1.778624752s" podCreationTimestamp="2026-01-22 10:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:58:32.766789813 +0000 UTC m=+1305.424825923" watchObservedRunningTime="2026-01-22 10:58:32.778624752 +0000 UTC m=+1305.436660882" Jan 22 10:58:33 crc kubenswrapper[4975]: I0122 10:58:33.985991 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 10:58:34 crc kubenswrapper[4975]: I0122 10:58:34.194133 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:58:34 crc kubenswrapper[4975]: I0122 10:58:34.204051 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.770676 4975 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-scheduler-0" event={"ID":"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9","Type":"ContainerStarted","Data":"7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a"} Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.773037 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-qztph" event={"ID":"c4f4b9fe-d65b-4778-a38e-704314e4cf34","Type":"ContainerStarted","Data":"b3b471e738dd8b620dd03840b6ceb0773c4d3059578a2b75c6359db70197ba79"} Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.773178 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.775218 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741","Type":"ContainerStarted","Data":"228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b"} Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.775261 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741","Type":"ContainerStarted","Data":"6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f"} Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.777215 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6f967745-7b78-4a9d-be66-b94128b04cf2","Type":"ContainerStarted","Data":"14a0b0c0595a809a63905ee9879a4981e5d51c008c7ddb6dfaf5c98179b7cf55"} Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.777295 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6f967745-7b78-4a9d-be66-b94128b04cf2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://14a0b0c0595a809a63905ee9879a4981e5d51c008c7ddb6dfaf5c98179b7cf55" gracePeriod=30 Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.780517 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4da2f868-dc94-4a58-9b68-7e4246ba3778","Type":"ContainerStarted","Data":"9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3"} Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.780570 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4da2f868-dc94-4a58-9b68-7e4246ba3778","Type":"ContainerStarted","Data":"fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f"} Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.780739 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerName="nova-metadata-log" containerID="cri-o://fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f" gracePeriod=30 Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.780743 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerName="nova-metadata-metadata" containerID="cri-o://9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3" gracePeriod=30 Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.796666 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.70948507 podStartE2EDuration="6.796619228s" podCreationTimestamp="2026-01-22 10:58:30 
+0000 UTC" firstStartedPulling="2026-01-22 10:58:31.387506534 +0000 UTC m=+1304.045542644" lastFinishedPulling="2026-01-22 10:58:35.474640692 +0000 UTC m=+1308.132676802" observedRunningTime="2026-01-22 10:58:36.786777055 +0000 UTC m=+1309.444813175" watchObservedRunningTime="2026-01-22 10:58:36.796619228 +0000 UTC m=+1309.454655348" Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.806234 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-qztph" podStartSLOduration=6.806216806 podStartE2EDuration="6.806216806s" podCreationTimestamp="2026-01-22 10:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:58:36.804985771 +0000 UTC m=+1309.463021891" watchObservedRunningTime="2026-01-22 10:58:36.806216806 +0000 UTC m=+1309.464252916" Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.831671 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.959856157 podStartE2EDuration="6.831653462s" podCreationTimestamp="2026-01-22 10:58:30 +0000 UTC" firstStartedPulling="2026-01-22 10:58:31.606037117 +0000 UTC m=+1304.264073227" lastFinishedPulling="2026-01-22 10:58:35.477834422 +0000 UTC m=+1308.135870532" observedRunningTime="2026-01-22 10:58:36.822521209 +0000 UTC m=+1309.480557339" watchObservedRunningTime="2026-01-22 10:58:36.831653462 +0000 UTC m=+1309.489689572" Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.844494 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.47738329 podStartE2EDuration="6.844475828s" podCreationTimestamp="2026-01-22 10:58:30 +0000 UTC" firstStartedPulling="2026-01-22 10:58:31.109119478 +0000 UTC m=+1303.767155588" lastFinishedPulling="2026-01-22 10:58:35.476212016 +0000 UTC m=+1308.134248126" observedRunningTime="2026-01-22 10:58:36.84019421 +0000 UTC m=+1309.498230320" watchObservedRunningTime="2026-01-22 10:58:36.844475828 +0000 UTC m=+1309.502511938" Jan 22 10:58:36 crc kubenswrapper[4975]: I0122 10:58:36.865523 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.756747873 podStartE2EDuration="6.865503103s" podCreationTimestamp="2026-01-22 10:58:30 +0000 UTC" firstStartedPulling="2026-01-22 10:58:31.367499227 +0000 UTC m=+1304.025535337" lastFinishedPulling="2026-01-22 10:58:35.476254457 +0000 UTC m=+1308.134290567" observedRunningTime="2026-01-22 10:58:36.856106312 +0000 UTC m=+1309.514142422" watchObservedRunningTime="2026-01-22 10:58:36.865503103 +0000 UTC m=+1309.523539213" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.404307 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.482963 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-config-data\") pod \"4da2f868-dc94-4a58-9b68-7e4246ba3778\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.483013 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9wg2\" (UniqueName: \"kubernetes.io/projected/4da2f868-dc94-4a58-9b68-7e4246ba3778-kube-api-access-n9wg2\") pod \"4da2f868-dc94-4a58-9b68-7e4246ba3778\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.483068 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da2f868-dc94-4a58-9b68-7e4246ba3778-logs\") pod \"4da2f868-dc94-4a58-9b68-7e4246ba3778\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.483112 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-combined-ca-bundle\") pod \"4da2f868-dc94-4a58-9b68-7e4246ba3778\" (UID: \"4da2f868-dc94-4a58-9b68-7e4246ba3778\") " Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.483860 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da2f868-dc94-4a58-9b68-7e4246ba3778-logs" (OuterVolumeSpecName: "logs") pod "4da2f868-dc94-4a58-9b68-7e4246ba3778" (UID: "4da2f868-dc94-4a58-9b68-7e4246ba3778"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.502215 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da2f868-dc94-4a58-9b68-7e4246ba3778-kube-api-access-n9wg2" (OuterVolumeSpecName: "kube-api-access-n9wg2") pod "4da2f868-dc94-4a58-9b68-7e4246ba3778" (UID: "4da2f868-dc94-4a58-9b68-7e4246ba3778"). InnerVolumeSpecName "kube-api-access-n9wg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.514065 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-config-data" (OuterVolumeSpecName: "config-data") pod "4da2f868-dc94-4a58-9b68-7e4246ba3778" (UID: "4da2f868-dc94-4a58-9b68-7e4246ba3778"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.533290 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4da2f868-dc94-4a58-9b68-7e4246ba3778" (UID: "4da2f868-dc94-4a58-9b68-7e4246ba3778"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.584993 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.585028 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9wg2\" (UniqueName: \"kubernetes.io/projected/4da2f868-dc94-4a58-9b68-7e4246ba3778-kube-api-access-n9wg2\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.585041 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da2f868-dc94-4a58-9b68-7e4246ba3778-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.585052 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da2f868-dc94-4a58-9b68-7e4246ba3778-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.791015 4975 generic.go:334] "Generic (PLEG): container finished" podID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerID="9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3" exitCode=0 Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.791358 4975 generic.go:334] "Generic (PLEG): container finished" podID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerID="fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f" exitCode=143 Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.791397 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4da2f868-dc94-4a58-9b68-7e4246ba3778","Type":"ContainerDied","Data":"9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3"} Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.791445 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4da2f868-dc94-4a58-9b68-7e4246ba3778","Type":"ContainerDied","Data":"fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f"} Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.791455 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4da2f868-dc94-4a58-9b68-7e4246ba3778","Type":"ContainerDied","Data":"17e5990378fd38157dbcb6353aedd7e5c5888e46dd86cec12a78c2af313451b6"} Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.791470 4975 scope.go:117] "RemoveContainer" containerID="9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.791587 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.826037 4975 scope.go:117] "RemoveContainer" containerID="fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.847199 4975 scope.go:117] "RemoveContainer" containerID="9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3" Jan 22 10:58:37 crc kubenswrapper[4975]: E0122 10:58:37.849160 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3\": container with ID starting with 9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3 not found: ID does not exist" containerID="9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.849198 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3"} err="failed to get container status \"9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3\": rpc error: code = NotFound desc = could not find container \"9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3\": container with ID starting with 9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3 not found: ID does not exist" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.849222 4975 scope.go:117] "RemoveContainer" containerID="fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f" Jan 22 10:58:37 crc kubenswrapper[4975]: E0122 10:58:37.850744 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f\": container with ID starting with fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f not found: ID does not exist" containerID="fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.850790 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f"} err="failed to get container status \"fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f\": rpc error: code = NotFound desc = could not find container \"fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f\": container with ID starting with fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f not found: ID does not exist" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.850819 4975 scope.go:117] "RemoveContainer" containerID="9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.851189 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3"} err="failed to get container status \"9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3\": rpc error: code = NotFound desc = could not find container \"9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3\": container with ID starting with 9ccd6856246ac9ddd6f272c580d38edea22ee6d826ab015fc1aef36982c43fc3 not found: ID does not exist" Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.851247 4975 
scope.go:117] "RemoveContainer" containerID="fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.852985 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f"} err="failed to get container status \"fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f\": rpc error: code = NotFound desc = could not find container \"fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f\": container with ID starting with fbf130d8d85c8d90158f36c70fee9eaac7f37d91de791bd38d705dc7bda2bf4f not found: ID does not exist"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.853597 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.871526 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.882051 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 10:58:37 crc kubenswrapper[4975]: E0122 10:58:37.882575 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerName="nova-metadata-log"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.882600 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerName="nova-metadata-log"
Jan 22 10:58:37 crc kubenswrapper[4975]: E0122 10:58:37.882625 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerName="nova-metadata-metadata"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.882635 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerName="nova-metadata-metadata"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.882853 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerName="nova-metadata-log"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.882895 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da2f868-dc94-4a58-9b68-7e4246ba3778" containerName="nova-metadata-metadata"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.884168 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.886248 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.889159 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.900066 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.992163 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.992276 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b2dz\" (UniqueName: \"kubernetes.io/projected/e6314ed0-edac-4981-974d-7df5e8b020c9-kube-api-access-7b2dz\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.992355 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-config-data\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.992393 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:37 crc kubenswrapper[4975]: I0122 10:58:37.992444 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6314ed0-edac-4981-974d-7df5e8b020c9-logs\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.094434 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b2dz\" (UniqueName: \"kubernetes.io/projected/e6314ed0-edac-4981-974d-7df5e8b020c9-kube-api-access-7b2dz\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.094550 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-config-data\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.094604 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.094637 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6314ed0-edac-4981-974d-7df5e8b020c9-logs\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.094722 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.095890 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6314ed0-edac-4981-974d-7df5e8b020c9-logs\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.099824 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.100381 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.111097 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b2dz\" (UniqueName: \"kubernetes.io/projected/e6314ed0-edac-4981-974d-7df5e8b020c9-kube-api-access-7b2dz\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.111156 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-config-data\") pod \"nova-metadata-0\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.201382 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.628865 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.629308 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="fb31a1a0-e358-436c-b4e3-9bd699096030" containerName="kube-state-metrics" containerID="cri-o://5dbda683ca515651b23b2d384cfe18ce78217bb970e3ccd5cfcf3df79057f044" gracePeriod=30
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.640000 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.807847 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6314ed0-edac-4981-974d-7df5e8b020c9","Type":"ContainerStarted","Data":"a147320e6257b9c7df7686e1318cf1ee0fa59d1b630f3f200b39de26e89561da"}
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.809467 4975 generic.go:334] "Generic (PLEG): container finished" podID="fb31a1a0-e358-436c-b4e3-9bd699096030" containerID="5dbda683ca515651b23b2d384cfe18ce78217bb970e3ccd5cfcf3df79057f044" exitCode=2
Jan 22 10:58:38 crc kubenswrapper[4975]: I0122 10:58:38.809504 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fb31a1a0-e358-436c-b4e3-9bd699096030","Type":"ContainerDied","Data":"5dbda683ca515651b23b2d384cfe18ce78217bb970e3ccd5cfcf3df79057f044"}
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.122756 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.213711 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7b75\" (UniqueName: \"kubernetes.io/projected/fb31a1a0-e358-436c-b4e3-9bd699096030-kube-api-access-g7b75\") pod \"fb31a1a0-e358-436c-b4e3-9bd699096030\" (UID: \"fb31a1a0-e358-436c-b4e3-9bd699096030\") "
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.219062 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb31a1a0-e358-436c-b4e3-9bd699096030-kube-api-access-g7b75" (OuterVolumeSpecName: "kube-api-access-g7b75") pod "fb31a1a0-e358-436c-b4e3-9bd699096030" (UID: "fb31a1a0-e358-436c-b4e3-9bd699096030"). InnerVolumeSpecName "kube-api-access-g7b75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.315519 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7b75\" (UniqueName: \"kubernetes.io/projected/fb31a1a0-e358-436c-b4e3-9bd699096030-kube-api-access-g7b75\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.770049 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da2f868-dc94-4a58-9b68-7e4246ba3778" path="/var/lib/kubelet/pods/4da2f868-dc94-4a58-9b68-7e4246ba3778/volumes"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.822595 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6314ed0-edac-4981-974d-7df5e8b020c9","Type":"ContainerStarted","Data":"adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb"}
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.822640 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6314ed0-edac-4981-974d-7df5e8b020c9","Type":"ContainerStarted","Data":"1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05"}
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.829609 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fb31a1a0-e358-436c-b4e3-9bd699096030","Type":"ContainerDied","Data":"f47c08f36970e6857ecd5b1b2f749be259415890c1cdb75a7ac8117163c56fdd"}
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.829673 4975 scope.go:117] "RemoveContainer" containerID="5dbda683ca515651b23b2d384cfe18ce78217bb970e3ccd5cfcf3df79057f044"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.829860 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.845297 4975 generic.go:334] "Generic (PLEG): container finished" podID="6ef0e4bb-4778-4690-abc6-21c903eb69e3" containerID="928298a29e62cc725f9d6f84b054a0407f4c86450690746034b7781994797911" exitCode=0
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.845369 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zcnd" event={"ID":"6ef0e4bb-4778-4690-abc6-21c903eb69e3","Type":"ContainerDied","Data":"928298a29e62cc725f9d6f84b054a0407f4c86450690746034b7781994797911"}
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.849052 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.849032493 podStartE2EDuration="2.849032493s" podCreationTimestamp="2026-01-22 10:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:58:39.84782813 +0000 UTC m=+1312.505864250" watchObservedRunningTime="2026-01-22 10:58:39.849032493 +0000 UTC m=+1312.507068603"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.901820 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.914348 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.922464 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 22 10:58:39 crc kubenswrapper[4975]: E0122 10:58:39.923023 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb31a1a0-e358-436c-b4e3-9bd699096030" containerName="kube-state-metrics"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.923042 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb31a1a0-e358-436c-b4e3-9bd699096030" containerName="kube-state-metrics"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.923212 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb31a1a0-e358-436c-b4e3-9bd699096030" containerName="kube-state-metrics"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.923780 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.926150 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.926388 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 22 10:58:39 crc kubenswrapper[4975]: I0122 10:58:39.942251 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.027700 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpjqp\" (UniqueName: \"kubernetes.io/projected/9f501c78-c779-4060-9262-dbbe23739de6-kube-api-access-hpjqp\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.027997 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f501c78-c779-4060-9262-dbbe23739de6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.028118 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f501c78-c779-4060-9262-dbbe23739de6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.028215 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9f501c78-c779-4060-9262-dbbe23739de6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.129670 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpjqp\" (UniqueName: \"kubernetes.io/projected/9f501c78-c779-4060-9262-dbbe23739de6-kube-api-access-hpjqp\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.129760 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f501c78-c779-4060-9262-dbbe23739de6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.130542 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f501c78-c779-4060-9262-dbbe23739de6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.130597 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9f501c78-c779-4060-9262-dbbe23739de6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.135815 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9f501c78-c779-4060-9262-dbbe23739de6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.135857 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f501c78-c779-4060-9262-dbbe23739de6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.135975 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f501c78-c779-4060-9262-dbbe23739de6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.151542 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpjqp\" (UniqueName: \"kubernetes.io/projected/9f501c78-c779-4060-9262-dbbe23739de6-kube-api-access-hpjqp\") pod \"kube-state-metrics-0\" (UID: \"9f501c78-c779-4060-9262-dbbe23739de6\") " pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.239168 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.449512 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.450905 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="ceilometer-central-agent" containerID="cri-o://e15de372b37d2164dac28c94d2dd57b55b08e65f5446998722071cc9de410691" gracePeriod=30
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.451042 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="proxy-httpd" containerID="cri-o://a7ec4274bbf787cef63addd879fe8aea57f0619daf3d56ff935b80eb70e24079" gracePeriod=30
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.451077 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="sg-core" containerID="cri-o://730e273a1b3cdf04f5b63bd7622bebbc9329b128126263aa48522f6d1912ebeb" gracePeriod=30
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.451107 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="ceilometer-notification-agent" containerID="cri-o://471fdfa54164a659bb95500a1d670c62ecfa145bd4d406e539b8462740b4f934" gracePeriod=30
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.731816 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.731876 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.741607 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 22 10:58:40 crc kubenswrapper[4975]: W0122 10:58:40.750170 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f501c78_c779_4060_9262_dbbe23739de6.slice/crio-5cbad2ec4d3f5d3c90098d4aeb02a9c2bf08c375233a69c0d26967ca62368d02 WatchSource:0}: Error finding container 5cbad2ec4d3f5d3c90098d4aeb02a9c2bf08c375233a69c0d26967ca62368d02: Status 404 returned error can't find the container with id 5cbad2ec4d3f5d3c90098d4aeb02a9c2bf08c375233a69c0d26967ca62368d02
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.752474 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-qztph"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.776580 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.776653 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.819460 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.841107 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vm7w6"]
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.841425 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" podUID="774f6271-a856-40ae-a082-f27ef97f7582" containerName="dnsmasq-dns" containerID="cri-o://0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9" gracePeriod=10
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.930381 4975 generic.go:334] "Generic (PLEG): container finished" podID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerID="a7ec4274bbf787cef63addd879fe8aea57f0619daf3d56ff935b80eb70e24079" exitCode=0
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.930621 4975 generic.go:334] "Generic (PLEG): container finished" podID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerID="730e273a1b3cdf04f5b63bd7622bebbc9329b128126263aa48522f6d1912ebeb" exitCode=2
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.930629 4975 generic.go:334] "Generic (PLEG): container finished" podID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerID="e15de372b37d2164dac28c94d2dd57b55b08e65f5446998722071cc9de410691" exitCode=0
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.930446 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerDied","Data":"a7ec4274bbf787cef63addd879fe8aea57f0619daf3d56ff935b80eb70e24079"}
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.930693 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerDied","Data":"730e273a1b3cdf04f5b63bd7622bebbc9329b128126263aa48522f6d1912ebeb"}
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.930709 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerDied","Data":"e15de372b37d2164dac28c94d2dd57b55b08e65f5446998722071cc9de410691"}
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.933258 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f501c78-c779-4060-9262-dbbe23739de6","Type":"ContainerStarted","Data":"5cbad2ec4d3f5d3c90098d4aeb02a9c2bf08c375233a69c0d26967ca62368d02"}
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.958685 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 10:58:40 crc kubenswrapper[4975]: I0122 10:58:40.975146 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.342997 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zcnd"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.444548 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.468397 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-combined-ca-bundle\") pod \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.468808 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf6j5\" (UniqueName: \"kubernetes.io/projected/6ef0e4bb-4778-4690-abc6-21c903eb69e3-kube-api-access-tf6j5\") pod \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.469036 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-scripts\") pod \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.469159 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-config-data\") pod \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\" (UID: \"6ef0e4bb-4778-4690-abc6-21c903eb69e3\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.477785 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef0e4bb-4778-4690-abc6-21c903eb69e3-kube-api-access-tf6j5" (OuterVolumeSpecName: "kube-api-access-tf6j5") pod "6ef0e4bb-4778-4690-abc6-21c903eb69e3" (UID: "6ef0e4bb-4778-4690-abc6-21c903eb69e3"). InnerVolumeSpecName "kube-api-access-tf6j5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.477856 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-scripts" (OuterVolumeSpecName: "scripts") pod "6ef0e4bb-4778-4690-abc6-21c903eb69e3" (UID: "6ef0e4bb-4778-4690-abc6-21c903eb69e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.503804 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ef0e4bb-4778-4690-abc6-21c903eb69e3" (UID: "6ef0e4bb-4778-4690-abc6-21c903eb69e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.527631 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-config-data" (OuterVolumeSpecName: "config-data") pod "6ef0e4bb-4778-4690-abc6-21c903eb69e3" (UID: "6ef0e4bb-4778-4690-abc6-21c903eb69e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.570213 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-sb\") pod \"774f6271-a856-40ae-a082-f27ef97f7582\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.570410 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-swift-storage-0\") pod \"774f6271-a856-40ae-a082-f27ef97f7582\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.570507 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gtqn\" (UniqueName: \"kubernetes.io/projected/774f6271-a856-40ae-a082-f27ef97f7582-kube-api-access-8gtqn\") pod \"774f6271-a856-40ae-a082-f27ef97f7582\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.570587 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-nb\") pod \"774f6271-a856-40ae-a082-f27ef97f7582\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.570621 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-config\") pod \"774f6271-a856-40ae-a082-f27ef97f7582\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.570647 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-svc\") pod \"774f6271-a856-40ae-a082-f27ef97f7582\" (UID: \"774f6271-a856-40ae-a082-f27ef97f7582\") "
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.571177 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf6j5\" (UniqueName: \"kubernetes.io/projected/6ef0e4bb-4778-4690-abc6-21c903eb69e3-kube-api-access-tf6j5\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.571193 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.571206 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.571219 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef0e4bb-4778-4690-abc6-21c903eb69e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.588542 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774f6271-a856-40ae-a082-f27ef97f7582-kube-api-access-8gtqn" (OuterVolumeSpecName: "kube-api-access-8gtqn") pod "774f6271-a856-40ae-a082-f27ef97f7582" (UID: "774f6271-a856-40ae-a082-f27ef97f7582"). InnerVolumeSpecName "kube-api-access-8gtqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.663733 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "774f6271-a856-40ae-a082-f27ef97f7582" (UID: "774f6271-a856-40ae-a082-f27ef97f7582"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.673243 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.673267 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gtqn\" (UniqueName: \"kubernetes.io/projected/774f6271-a856-40ae-a082-f27ef97f7582-kube-api-access-8gtqn\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.676920 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "774f6271-a856-40ae-a082-f27ef97f7582" (UID: "774f6271-a856-40ae-a082-f27ef97f7582"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.682825 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "774f6271-a856-40ae-a082-f27ef97f7582" (UID: "774f6271-a856-40ae-a082-f27ef97f7582"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.696855 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "774f6271-a856-40ae-a082-f27ef97f7582" (UID: "774f6271-a856-40ae-a082-f27ef97f7582"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.717791 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-config" (OuterVolumeSpecName: "config") pod "774f6271-a856-40ae-a082-f27ef97f7582" (UID: "774f6271-a856-40ae-a082-f27ef97f7582"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.764221 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb31a1a0-e358-436c-b4e3-9bd699096030" path="/var/lib/kubelet/pods/fb31a1a0-e358-436c-b4e3-9bd699096030/volumes"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.775772 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.775802 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-config\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.775813 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.775828 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/774f6271-a856-40ae-a082-f27ef97f7582-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.813772 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.814121 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.945803 4975 generic.go:334] "Generic (PLEG): container finished" podID="39939ecf-15f1-4cdc-a3d5-cf09de7c3b86" containerID="fa3abc8565a2f06bff11a63285a48dae0928de32844c130ccfb00840267b15d9" exitCode=0
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.945858 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2m6rj" event={"ID":"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86","Type":"ContainerDied","Data":"fa3abc8565a2f06bff11a63285a48dae0928de32844c130ccfb00840267b15d9"}
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.950721 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f501c78-c779-4060-9262-dbbe23739de6","Type":"ContainerStarted","Data":"4b8f2eaf9df1ddfe4222cb3decae2d76f81fe512e7aa8e9994e4d8f73277645d"}
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.950767 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.953672 4975 generic.go:334] "Generic (PLEG): container finished" podID="774f6271-a856-40ae-a082-f27ef97f7582" containerID="0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9" exitCode=0
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.953743 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.953759 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" event={"ID":"774f6271-a856-40ae-a082-f27ef97f7582","Type":"ContainerDied","Data":"0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9"}
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.953796 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vm7w6" event={"ID":"774f6271-a856-40ae-a082-f27ef97f7582","Type":"ContainerDied","Data":"3e3fe37cbe12a6f8e4616941367b5231d26e58a3466d9aeef735d120ec1c9fd7"}
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.953818 4975 scope.go:117] "RemoveContainer" containerID="0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.969760 4975 generic.go:334] "Generic (PLEG): container finished" podID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerID="471fdfa54164a659bb95500a1d670c62ecfa145bd4d406e539b8462740b4f934" exitCode=0
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.969889 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerDied","Data":"471fdfa54164a659bb95500a1d670c62ecfa145bd4d406e539b8462740b4f934"}
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.975918 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zcnd"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.976049 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zcnd" event={"ID":"6ef0e4bb-4778-4690-abc6-21c903eb69e3","Type":"ContainerDied","Data":"4529562c315c734b41757ebdb47d78f5962e07f456c84b054bb2fd61a2236758"}
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.976090 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4529562c315c734b41757ebdb47d78f5962e07f456c84b054bb2fd61a2236758"
Jan 22 10:58:41 crc kubenswrapper[4975]: I0122 10:58:41.987853 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.616082779 podStartE2EDuration="2.987833299s" podCreationTimestamp="2026-01-22 10:58:39 +0000 UTC" firstStartedPulling="2026-01-22 10:58:40.753159409 +0000 UTC m=+1313.411195519" lastFinishedPulling="2026-01-22 10:58:41.124909929 +0000 UTC m=+1313.782946039" observedRunningTime="2026-01-22 10:58:41.983934001 +0000 UTC m=+1314.641970131" watchObservedRunningTime="2026-01-22 10:58:41.987833299 +0000 UTC m=+1314.645869409"
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.025177 4975 scope.go:117] "RemoveContainer" containerID="5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662"
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.039446 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vm7w6"]
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.049847 4975 scope.go:117] "RemoveContainer" containerID="0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9"
Jan 22 10:58:42 crc kubenswrapper[4975]: E0122 10:58:42.050211 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9\": container with ID starting with 0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9 not found: ID does not exist" containerID="0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9"
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.050265 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9"} err="failed to get container status \"0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9\": rpc error: code = NotFound desc = could not find container \"0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9\": container with ID starting with 0d64ac9f578567bac5838ee988aea4b98bccd30faeb6299d19642ac86b4c00d9 not found: ID does not exist"
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.050295 4975 scope.go:117] "RemoveContainer" containerID="5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662"
Jan 22 10:58:42 crc kubenswrapper[4975]: E0122 10:58:42.050615 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662\": container with ID starting with 5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662 not found: ID does not exist" containerID="5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662"
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.050649 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662"} err="failed to get container status \"5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662\": rpc error: code = NotFound desc = could not find container \"5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662\": container with ID starting with 5ca14f5603600cbab2785d4a4f1de2e32e97b720c81343f35cb84cbe3148f662 not found: ID does not exist"
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.052384 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vm7w6"]
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.130472 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.130720 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-log" containerID="cri-o://6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f" gracePeriod=30
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.131136 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-api" containerID="cri-o://228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b" gracePeriod=30
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.147906 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.208589 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.208853 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerName="nova-metadata-log" containerID="cri-o://1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05" gracePeriod=30
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.209320 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerName="nova-metadata-metadata" containerID="cri-o://adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb" gracePeriod=30
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.280578 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.403329 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-sg-core-conf-yaml\") pod \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") "
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.403670 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-log-httpd\") pod \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") "
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.403697 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-config-data\") pod \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") "
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.403830 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-combined-ca-bundle\") pod \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") "
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.403855 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l9j5\" (UniqueName: \"kubernetes.io/projected/a82f5ab8-87b3-4b27-82c6-34739a5afb69-kube-api-access-7l9j5\") pod \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") "
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.403909 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-run-httpd\") pod \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") "
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.403947 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-scripts\") pod \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\" (UID: \"a82f5ab8-87b3-4b27-82c6-34739a5afb69\") "
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.404106 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a82f5ab8-87b3-4b27-82c6-34739a5afb69" (UID: "a82f5ab8-87b3-4b27-82c6-34739a5afb69"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.404880 4975 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.407956 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a82f5ab8-87b3-4b27-82c6-34739a5afb69" (UID: "a82f5ab8-87b3-4b27-82c6-34739a5afb69"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.413000 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82f5ab8-87b3-4b27-82c6-34739a5afb69-kube-api-access-7l9j5" (OuterVolumeSpecName: "kube-api-access-7l9j5") pod "a82f5ab8-87b3-4b27-82c6-34739a5afb69" (UID: "a82f5ab8-87b3-4b27-82c6-34739a5afb69"). InnerVolumeSpecName "kube-api-access-7l9j5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.415539 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-scripts" (OuterVolumeSpecName: "scripts") pod "a82f5ab8-87b3-4b27-82c6-34739a5afb69" (UID: "a82f5ab8-87b3-4b27-82c6-34739a5afb69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.486473 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a82f5ab8-87b3-4b27-82c6-34739a5afb69" (UID: "a82f5ab8-87b3-4b27-82c6-34739a5afb69"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.506674 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l9j5\" (UniqueName: \"kubernetes.io/projected/a82f5ab8-87b3-4b27-82c6-34739a5afb69-kube-api-access-7l9j5\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.506710 4975 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a82f5ab8-87b3-4b27-82c6-34739a5afb69-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.506723 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.506735 4975 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.511612 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a82f5ab8-87b3-4b27-82c6-34739a5afb69" (UID: "a82f5ab8-87b3-4b27-82c6-34739a5afb69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.545947 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-config-data" (OuterVolumeSpecName: "config-data") pod "a82f5ab8-87b3-4b27-82c6-34739a5afb69" (UID: "a82f5ab8-87b3-4b27-82c6-34739a5afb69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.615878 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.615909 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a82f5ab8-87b3-4b27-82c6-34739a5afb69-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.984727 4975 generic.go:334] "Generic (PLEG): container finished" podID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerID="1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05" exitCode=143
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.984792 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6314ed0-edac-4981-974d-7df5e8b020c9","Type":"ContainerDied","Data":"1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05"}
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.986759 4975 generic.go:334] "Generic (PLEG): container finished" podID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerID="6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f" exitCode=143
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.986790 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741","Type":"ContainerDied","Data":"6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f"}
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.990420 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a82f5ab8-87b3-4b27-82c6-34739a5afb69","Type":"ContainerDied","Data":"3d168047e7c5b8a597a12e517b4208e4f7e2e5ec10c41836609056601e1b03ae"}
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.990459 4975 scope.go:117] "RemoveContainer" containerID="a7ec4274bbf787cef63addd879fe8aea57f0619daf3d56ff935b80eb70e24079"
Jan 22 10:58:42 crc kubenswrapper[4975]: I0122 10:58:42.990512 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.016916 4975 scope.go:117] "RemoveContainer" containerID="730e273a1b3cdf04f5b63bd7622bebbc9329b128126263aa48522f6d1912ebeb"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.033031 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.042650 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.052478 4975 scope.go:117] "RemoveContainer" containerID="471fdfa54164a659bb95500a1d670c62ecfa145bd4d406e539b8462740b4f934"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.067446 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:58:43 crc kubenswrapper[4975]: E0122 10:58:43.067902 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774f6271-a856-40ae-a082-f27ef97f7582" containerName="dnsmasq-dns"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.067917 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="774f6271-a856-40ae-a082-f27ef97f7582" containerName="dnsmasq-dns"
Jan 22 10:58:43 crc kubenswrapper[4975]: E0122 10:58:43.067935 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="ceilometer-central-agent"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.067945 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="ceilometer-central-agent"
Jan 22 10:58:43 crc kubenswrapper[4975]: E0122 10:58:43.067963 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="ceilometer-notification-agent"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.067973 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="ceilometer-notification-agent"
Jan 22 10:58:43 crc kubenswrapper[4975]: E0122 10:58:43.068000 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="proxy-httpd"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068008 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="proxy-httpd"
Jan 22 10:58:43 crc kubenswrapper[4975]: E0122 10:58:43.068018 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774f6271-a856-40ae-a082-f27ef97f7582" containerName="init"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068025 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="774f6271-a856-40ae-a082-f27ef97f7582" containerName="init"
Jan 22 10:58:43 crc kubenswrapper[4975]: E0122 10:58:43.068037 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="sg-core"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068045 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="sg-core"
Jan 22 10:58:43 crc kubenswrapper[4975]: E0122 10:58:43.068067 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef0e4bb-4778-4690-abc6-21c903eb69e3" containerName="nova-manage"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068075 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef0e4bb-4778-4690-abc6-21c903eb69e3" containerName="nova-manage"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068295 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="proxy-httpd"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068317 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="774f6271-a856-40ae-a082-f27ef97f7582" containerName="dnsmasq-dns"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068353 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="sg-core"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068370 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="ceilometer-central-agent"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068393 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef0e4bb-4778-4690-abc6-21c903eb69e3" containerName="nova-manage"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.068417 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" containerName="ceilometer-notification-agent"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.070458 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.074622 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.074871 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.075039 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.079116 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.134730 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-run-httpd\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.134819 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.134892 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8xz\" (UniqueName: \"kubernetes.io/projected/34de56a7-2ba9-4841-9489-2c0e89be46dd-kube-api-access-wp8xz\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.134937 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-log-httpd\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.134979 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.135031 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-scripts\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.135077 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-config-data\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.135188 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.138350 4975 scope.go:117] "RemoveContainer" containerID="e15de372b37d2164dac28c94d2dd57b55b08e65f5446998722071cc9de410691"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.202533 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.202583 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.236989 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.237052 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-run-httpd\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.237104 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.237166 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp8xz\" (UniqueName: \"kubernetes.io/projected/34de56a7-2ba9-4841-9489-2c0e89be46dd-kube-api-access-wp8xz\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.237221 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-log-httpd\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.237255 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.237297 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-scripts\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.237356 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-config-data\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.239272 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-run-httpd\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.240584 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-log-httpd\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.245917 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.247337 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.248393 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-config-data\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.249403 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-scripts\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0"
Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.249761 4975 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.259146 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp8xz\" (UniqueName: \"kubernetes.io/projected/34de56a7-2ba9-4841-9489-2c0e89be46dd-kube-api-access-wp8xz\") pod \"ceilometer-0\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " pod="openstack/ceilometer-0" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.434512 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.446458 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.546144 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-scripts\") pod \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.546491 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-config-data\") pod \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.546525 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvd2s\" (UniqueName: \"kubernetes.io/projected/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-kube-api-access-nvd2s\") pod \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.546660 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-combined-ca-bundle\") pod \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\" (UID: \"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86\") " Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.558631 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-scripts" (OuterVolumeSpecName: "scripts") pod "39939ecf-15f1-4cdc-a3d5-cf09de7c3b86" (UID: "39939ecf-15f1-4cdc-a3d5-cf09de7c3b86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.558754 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-kube-api-access-nvd2s" (OuterVolumeSpecName: "kube-api-access-nvd2s") pod "39939ecf-15f1-4cdc-a3d5-cf09de7c3b86" (UID: "39939ecf-15f1-4cdc-a3d5-cf09de7c3b86"). InnerVolumeSpecName "kube-api-access-nvd2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.621031 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39939ecf-15f1-4cdc-a3d5-cf09de7c3b86" (UID: "39939ecf-15f1-4cdc-a3d5-cf09de7c3b86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.621119 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-config-data" (OuterVolumeSpecName: "config-data") pod "39939ecf-15f1-4cdc-a3d5-cf09de7c3b86" (UID: "39939ecf-15f1-4cdc-a3d5-cf09de7c3b86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.650143 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.650184 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.650196 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvd2s\" (UniqueName: \"kubernetes.io/projected/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-kube-api-access-nvd2s\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.650209 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.720955 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.766488 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774f6271-a856-40ae-a082-f27ef97f7582" path="/var/lib/kubelet/pods/774f6271-a856-40ae-a082-f27ef97f7582/volumes" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.767342 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82f5ab8-87b3-4b27-82c6-34739a5afb69" path="/var/lib/kubelet/pods/a82f5ab8-87b3-4b27-82c6-34739a5afb69/volumes" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.853508 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-nova-metadata-tls-certs\") pod \"e6314ed0-edac-4981-974d-7df5e8b020c9\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.853562 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-combined-ca-bundle\") pod \"e6314ed0-edac-4981-974d-7df5e8b020c9\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.853649 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-config-data\") pod \"e6314ed0-edac-4981-974d-7df5e8b020c9\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.853705 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6314ed0-edac-4981-974d-7df5e8b020c9-logs\") pod \"e6314ed0-edac-4981-974d-7df5e8b020c9\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.853749 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b2dz\" (UniqueName: \"kubernetes.io/projected/e6314ed0-edac-4981-974d-7df5e8b020c9-kube-api-access-7b2dz\") pod \"e6314ed0-edac-4981-974d-7df5e8b020c9\" (UID: \"e6314ed0-edac-4981-974d-7df5e8b020c9\") " Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.854153 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6314ed0-edac-4981-974d-7df5e8b020c9-logs" (OuterVolumeSpecName: "logs") pod "e6314ed0-edac-4981-974d-7df5e8b020c9" (UID: "e6314ed0-edac-4981-974d-7df5e8b020c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.854522 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6314ed0-edac-4981-974d-7df5e8b020c9-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.856836 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6314ed0-edac-4981-974d-7df5e8b020c9-kube-api-access-7b2dz" (OuterVolumeSpecName: "kube-api-access-7b2dz") pod "e6314ed0-edac-4981-974d-7df5e8b020c9" (UID: "e6314ed0-edac-4981-974d-7df5e8b020c9"). InnerVolumeSpecName "kube-api-access-7b2dz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.878175 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-config-data" (OuterVolumeSpecName: "config-data") pod "e6314ed0-edac-4981-974d-7df5e8b020c9" (UID: "e6314ed0-edac-4981-974d-7df5e8b020c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.878304 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6314ed0-edac-4981-974d-7df5e8b020c9" (UID: "e6314ed0-edac-4981-974d-7df5e8b020c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.905477 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e6314ed0-edac-4981-974d-7df5e8b020c9" (UID: "e6314ed0-edac-4981-974d-7df5e8b020c9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.958948 4975 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.958983 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.958996 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6314ed0-edac-4981-974d-7df5e8b020c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.959006 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b2dz\" (UniqueName: \"kubernetes.io/projected/e6314ed0-edac-4981-974d-7df5e8b020c9-kube-api-access-7b2dz\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:43 crc kubenswrapper[4975]: I0122 10:58:43.961760 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.002982 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2m6rj" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.003215 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2m6rj" event={"ID":"39939ecf-15f1-4cdc-a3d5-cf09de7c3b86","Type":"ContainerDied","Data":"ff3e357f88625ef82fbd9782a2162008e3498f13968678217395af941e870d3d"} Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.003284 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff3e357f88625ef82fbd9782a2162008e3498f13968678217395af941e870d3d" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.007066 4975 generic.go:334] "Generic (PLEG): container finished" podID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerID="adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb" exitCode=0 Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.007125 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.007153 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6314ed0-edac-4981-974d-7df5e8b020c9","Type":"ContainerDied","Data":"adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb"} Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.007185 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6314ed0-edac-4981-974d-7df5e8b020c9","Type":"ContainerDied","Data":"a147320e6257b9c7df7686e1318cf1ee0fa59d1b630f3f200b39de26e89561da"} Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.007201 4975 scope.go:117] "RemoveContainer" containerID="adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.008784 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerStarted","Data":"1055b2f92ba80bfaadf5c1f684d3c30aa520fadcdf83571ca314d967a7e63c90"} Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.014571 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" containerName="nova-scheduler-scheduler" containerID="cri-o://7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a" gracePeriod=30 Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.055028 4975 scope.go:117] "RemoveContainer" containerID="1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.070162 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 10:58:44 crc kubenswrapper[4975]: E0122 10:58:44.070775 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerName="nova-metadata-metadata" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.070790 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerName="nova-metadata-metadata" Jan 22 10:58:44 crc kubenswrapper[4975]: E0122 10:58:44.070819 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39939ecf-15f1-4cdc-a3d5-cf09de7c3b86" containerName="nova-cell1-conductor-db-sync" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.070825 4975 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="39939ecf-15f1-4cdc-a3d5-cf09de7c3b86" containerName="nova-cell1-conductor-db-sync" Jan 22 10:58:44 crc kubenswrapper[4975]: E0122 10:58:44.070834 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerName="nova-metadata-log" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.070843 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerName="nova-metadata-log" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.071310 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerName="nova-metadata-metadata" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.071376 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="39939ecf-15f1-4cdc-a3d5-cf09de7c3b86" containerName="nova-cell1-conductor-db-sync" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.071405 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6314ed0-edac-4981-974d-7df5e8b020c9" containerName="nova-metadata-log" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.072164 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.073832 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.082652 4975 scope.go:117] "RemoveContainer" containerID="adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb" Jan 22 10:58:44 crc kubenswrapper[4975]: E0122 10:58:44.083287 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb\": container with ID starting with adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb not found: ID does not exist" containerID="adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.083376 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb"} err="failed to get container status \"adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb\": rpc error: code = NotFound desc = could not find container \"adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb\": container with ID starting with adc15227a692ba5b6fcf961c6a76009e4e02b8d5813b660fe92ba01ffc54eceb not found: ID does not exist" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.083407 4975 scope.go:117] "RemoveContainer" containerID="1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05" Jan 22 10:58:44 crc kubenswrapper[4975]: E0122 10:58:44.083823 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05\": container with ID starting with 1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05 not found: ID does not exist" containerID="1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.083851 4975 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05"} err="failed to get container status \"1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05\": rpc error: code = NotFound desc = could not find container \"1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05\": container with ID starting with 1320cd15571a555daee77434badbf7c0a2b3360f151cab7e7302c3b2db1efe05 not found: ID does not exist" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.091841 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.134760 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.145805 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.165695 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c07c882-c139-44d5-be3c-79a3a02cab8a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c07c882-c139-44d5-be3c-79a3a02cab8a\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.165818 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hlr2\" (UniqueName: \"kubernetes.io/projected/8c07c882-c139-44d5-be3c-79a3a02cab8a-kube-api-access-6hlr2\") pod \"nova-cell1-conductor-0\" (UID: \"8c07c882-c139-44d5-be3c-79a3a02cab8a\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.165958 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c07c882-c139-44d5-be3c-79a3a02cab8a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c07c882-c139-44d5-be3c-79a3a02cab8a\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.171320 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.173002 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.174666 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.175078 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.183533 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.268115 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-config-data\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.268231 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c07c882-c139-44d5-be3c-79a3a02cab8a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c07c882-c139-44d5-be3c-79a3a02cab8a\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.268293 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e481a916-5eac-4504-b63b-fbef7f1ee1fe-logs\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.268320 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.268391 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hlr2\" (UniqueName: \"kubernetes.io/projected/8c07c882-c139-44d5-be3c-79a3a02cab8a-kube-api-access-6hlr2\") pod \"nova-cell1-conductor-0\" (UID: \"8c07c882-c139-44d5-be3c-79a3a02cab8a\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.268434 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.268462 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c6qh\" (UniqueName: \"kubernetes.io/projected/e481a916-5eac-4504-b63b-fbef7f1ee1fe-kube-api-access-6c6qh\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.268524 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c07c882-c139-44d5-be3c-79a3a02cab8a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"8c07c882-c139-44d5-be3c-79a3a02cab8a\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.273429 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c07c882-c139-44d5-be3c-79a3a02cab8a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c07c882-c139-44d5-be3c-79a3a02cab8a\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.273462 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c07c882-c139-44d5-be3c-79a3a02cab8a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c07c882-c139-44d5-be3c-79a3a02cab8a\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.293740 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hlr2\" (UniqueName: \"kubernetes.io/projected/8c07c882-c139-44d5-be3c-79a3a02cab8a-kube-api-access-6hlr2\") pod \"nova-cell1-conductor-0\" (UID: \"8c07c882-c139-44d5-be3c-79a3a02cab8a\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.384843 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e481a916-5eac-4504-b63b-fbef7f1ee1fe-logs\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.384923 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.385045 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.385090 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c6qh\" (UniqueName: \"kubernetes.io/projected/e481a916-5eac-4504-b63b-fbef7f1ee1fe-kube-api-access-6c6qh\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.385252 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-config-data\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.390590 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.391509 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e481a916-5eac-4504-b63b-fbef7f1ee1fe-logs\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.397418 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-config-data\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.403543 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.412564 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.415169 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c6qh\" (UniqueName: \"kubernetes.io/projected/e481a916-5eac-4504-b63b-fbef7f1ee1fe-kube-api-access-6c6qh\") pod \"nova-metadata-0\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.491998 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:58:44 crc kubenswrapper[4975]: I0122 10:58:44.739784 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 10:58:45 crc kubenswrapper[4975]: I0122 10:58:45.028272 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerStarted","Data":"5f920509d608cadcbe4726fdc850f379eb721d1f83b98ff61b1b2634412e44ca"} Jan 22 10:58:45 crc kubenswrapper[4975]: I0122 10:58:45.031022 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8c07c882-c139-44d5-be3c-79a3a02cab8a","Type":"ContainerStarted","Data":"d066cd5c4695e983b7edb7dd9ad4f1d2349a09b5510fc2bc88082ef06f477e4d"} Jan 22 10:58:45 crc kubenswrapper[4975]: I0122 10:58:45.031047 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8c07c882-c139-44d5-be3c-79a3a02cab8a","Type":"ContainerStarted","Data":"d338635093945fc60e396e4499a63aa56f67d5583a0626b71cd28b6b0c9ce629"} Jan 22 10:58:45 crc kubenswrapper[4975]: I0122 10:58:45.032001 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:45 crc kubenswrapper[4975]: I0122 10:58:45.035537 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:58:45 crc kubenswrapper[4975]: I0122 10:58:45.048808 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.048795561 podStartE2EDuration="1.048795561s" podCreationTimestamp="2026-01-22 10:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:58:45.045586642 +0000 UTC m=+1317.703622752" watchObservedRunningTime="2026-01-22 10:58:45.048795561 +0000 UTC 
m=+1317.706831671" Jan 22 10:58:45 crc kubenswrapper[4975]: I0122 10:58:45.780967 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6314ed0-edac-4981-974d-7df5e8b020c9" path="/var/lib/kubelet/pods/e6314ed0-edac-4981-974d-7df5e8b020c9/volumes" Jan 22 10:58:45 crc kubenswrapper[4975]: E0122 10:58:45.787587 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 10:58:45 crc kubenswrapper[4975]: E0122 10:58:45.789145 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 10:58:45 crc kubenswrapper[4975]: E0122 10:58:45.790563 4975 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 10:58:45 crc kubenswrapper[4975]: E0122 10:58:45.790626 4975 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" containerName="nova-scheduler-scheduler" Jan 22 10:58:46 crc kubenswrapper[4975]: I0122 10:58:46.045817 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerStarted","Data":"2ea48f7fb99a6e6288369ef9d1818d9cd8492e4864975bbdaee103aaf373bf51"} Jan 22 10:58:46 crc kubenswrapper[4975]: I0122 10:58:46.048432 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e481a916-5eac-4504-b63b-fbef7f1ee1fe","Type":"ContainerStarted","Data":"6ee380a46ff83a4ec86931eeeeda60756dcedd49841e71f1f7e81c860253ebdb"} Jan 22 10:58:46 crc kubenswrapper[4975]: I0122 10:58:46.048457 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e481a916-5eac-4504-b63b-fbef7f1ee1fe","Type":"ContainerStarted","Data":"6b5004b2b9c2f7b5da1a6880837e4a4096659d38403f2eff4dc6d476cd3d3a19"} Jan 22 10:58:46 crc kubenswrapper[4975]: I0122 10:58:46.048469 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e481a916-5eac-4504-b63b-fbef7f1ee1fe","Type":"ContainerStarted","Data":"085d8e8dc803f4b5b793d3e32186b6412c07225f827a2fb093f4c22fd4a7046a"} Jan 22 10:58:46 crc kubenswrapper[4975]: I0122 10:58:46.095255 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.095234351 podStartE2EDuration="2.095234351s" podCreationTimestamp="2026-01-22 10:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:58:46.06710687 +0000 UTC m=+1318.725142980" watchObservedRunningTime="2026-01-22 
10:58:46.095234351 +0000 UTC m=+1318.753270481" Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.058356 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerStarted","Data":"c374f2f5a2e0f81ff52b7822a2002082f23da304a90cb151c0c986aa13c87b3d"} Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.726087 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.851606 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-combined-ca-bundle\") pod \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.851775 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9wpn\" (UniqueName: \"kubernetes.io/projected/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-kube-api-access-q9wpn\") pod \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.851844 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-config-data\") pod \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\" (UID: \"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9\") " Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.863456 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-kube-api-access-q9wpn" (OuterVolumeSpecName: "kube-api-access-q9wpn") pod "f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" (UID: "f20dd1ec-3ab9-4409-bd87-f1ce110f7da9"). InnerVolumeSpecName "kube-api-access-q9wpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.889749 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-config-data" (OuterVolumeSpecName: "config-data") pod "f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" (UID: "f20dd1ec-3ab9-4409-bd87-f1ce110f7da9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.903668 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" (UID: "f20dd1ec-3ab9-4409-bd87-f1ce110f7da9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.954538 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.954585 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9wpn\" (UniqueName: \"kubernetes.io/projected/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-kube-api-access-q9wpn\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:47 crc kubenswrapper[4975]: I0122 10:58:47.954600 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.010902 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.067473 4975 generic.go:334] "Generic (PLEG): container finished" podID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerID="228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b" exitCode=0 Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.067586 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741","Type":"ContainerDied","Data":"228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b"} Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.068579 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741","Type":"ContainerDied","Data":"c76f0d1736e31e94de3a51dc8080f5ff14524656c130493e8f139aa42b5d9303"} Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.067611 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.068600 4975 scope.go:117] "RemoveContainer" containerID="228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.075227 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerStarted","Data":"63978e89a5c0c4cefac27eccd3eb33be3363f4c281ff37bb95e6d34b444997d0"} Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.076372 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.081016 4975 generic.go:334] "Generic (PLEG): container finished" podID="f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" containerID="7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a" exitCode=0 Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.081534 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9","Type":"ContainerDied","Data":"7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a"} Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.081613 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f20dd1ec-3ab9-4409-bd87-f1ce110f7da9","Type":"ContainerDied","Data":"3d9e4fc807412c898ae9807268e86722e46e7c4a181ef5eca8759774ad0358eb"} Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.082235 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.102249 4975 scope.go:117] "RemoveContainer" containerID="6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.117788 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6650886489999999 podStartE2EDuration="5.117765676s" podCreationTimestamp="2026-01-22 10:58:43 +0000 UTC" firstStartedPulling="2026-01-22 10:58:43.970377323 +0000 UTC m=+1316.628413433" lastFinishedPulling="2026-01-22 10:58:47.42305434 +0000 UTC m=+1320.081090460" observedRunningTime="2026-01-22 10:58:48.102683716 +0000 UTC m=+1320.760719846" watchObservedRunningTime="2026-01-22 10:58:48.117765676 +0000 UTC m=+1320.775801806" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.135881 4975 scope.go:117] "RemoveContainer" containerID="228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b" Jan 22 10:58:48 crc kubenswrapper[4975]: E0122 10:58:48.137253 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b\": container with ID starting with 228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b not found: ID does not exist" containerID="228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.137287 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b"} err="failed to get container status \"228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b\": rpc error: code = NotFound desc = could not find 
container \"228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b\": container with ID starting with 228efefcb710e1103eac202992785b14950a0aeb26d9043e911293da72f75b9b not found: ID does not exist" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.137307 4975 scope.go:117] "RemoveContainer" containerID="6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f" Jan 22 10:58:48 crc kubenswrapper[4975]: E0122 10:58:48.137649 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f\": container with ID starting with 6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f not found: ID does not exist" containerID="6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.137685 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f"} err="failed to get container status \"6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f\": rpc error: code = NotFound desc = could not find container \"6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f\": container with ID starting with 6e88ee718dda87725f2cc96931f86ca5c5ee0d70b083638f17794c998ac1010f not found: ID does not exist" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.137701 4975 scope.go:117] "RemoveContainer" containerID="7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.140152 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.157680 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74sl9\" (UniqueName: \"kubernetes.io/projected/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-kube-api-access-74sl9\") pod \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.158014 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-combined-ca-bundle\") pod \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.158178 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-logs\") pod \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.158414 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-config-data\") pod \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\" (UID: \"4a7a50a0-2077-48c5-8a6d-e0aaa4c43741\") " Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.159029 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-logs" (OuterVolumeSpecName: "logs") pod "4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" (UID: "4a7a50a0-2077-48c5-8a6d-e0aaa4c43741"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.161457 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-kube-api-access-74sl9" (OuterVolumeSpecName: "kube-api-access-74sl9") pod "4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" (UID: "4a7a50a0-2077-48c5-8a6d-e0aaa4c43741"). InnerVolumeSpecName "kube-api-access-74sl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.169522 4975 scope.go:117] "RemoveContainer" containerID="7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.169769 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:58:48 crc kubenswrapper[4975]: E0122 10:58:48.171444 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a\": container with ID starting with 7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a not found: ID does not exist" containerID="7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.171567 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a"} err="failed to get container status \"7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a\": rpc error: code = NotFound desc = could not find container \"7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a\": container with ID starting with 7a63ff047b4c47026ceb743c5474edf5cd2982f24047c20f4d27e98392c5707a not found: ID does not exist" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.177463 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:58:48 crc kubenswrapper[4975]: E0122 10:58:48.177909 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-log" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.177935 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-log" Jan 22 10:58:48 crc kubenswrapper[4975]: E0122 10:58:48.177952 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-api" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.177959 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-api" Jan 22 10:58:48 crc kubenswrapper[4975]: E0122 10:58:48.177983 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" containerName="nova-scheduler-scheduler" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.177989 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" containerName="nova-scheduler-scheduler" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.178165 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-log" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.178184 4975 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" containerName="nova-scheduler-scheduler" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.178196 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" containerName="nova-api-api" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.178759 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.180279 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.184250 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-config-data" (OuterVolumeSpecName: "config-data") pod "4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" (UID: "4a7a50a0-2077-48c5-8a6d-e0aaa4c43741"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.188614 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" (UID: "4a7a50a0-2077-48c5-8a6d-e0aaa4c43741"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.190426 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.261935 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.262155 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-config-data\") pod \"nova-scheduler-0\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.262406 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74twl\" (UniqueName: \"kubernetes.io/projected/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-kube-api-access-74twl\") pod \"nova-scheduler-0\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.262832 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74sl9\" (UniqueName: \"kubernetes.io/projected/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-kube-api-access-74sl9\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.262859 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.262874 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.262915 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.364380 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-config-data\") pod \"nova-scheduler-0\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.364432 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74twl\" (UniqueName: \"kubernetes.io/projected/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-kube-api-access-74twl\") pod \"nova-scheduler-0\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.364669 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.369541 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-config-data\") pod \"nova-scheduler-0\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.371967 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.382816 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74twl\" (UniqueName: \"kubernetes.io/projected/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-kube-api-access-74twl\") pod \"nova-scheduler-0\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.494559 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.502806 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.516567 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.538090 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.541093 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.544015 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.547798 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.671023 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.671421 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khndv\" (UniqueName: \"kubernetes.io/projected/3db2ac85-1801-4446-8617-e9d16bf48742-kube-api-access-khndv\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.671457 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2ac85-1801-4446-8617-e9d16bf48742-logs\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.671735 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-config-data\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.776529 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.776689 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khndv\" (UniqueName: \"kubernetes.io/projected/3db2ac85-1801-4446-8617-e9d16bf48742-kube-api-access-khndv\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.776736 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2ac85-1801-4446-8617-e9d16bf48742-logs\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.776784 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-config-data\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.778254 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2ac85-1801-4446-8617-e9d16bf48742-logs\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " 
pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.785131 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.801072 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khndv\" (UniqueName: \"kubernetes.io/projected/3db2ac85-1801-4446-8617-e9d16bf48742-kube-api-access-khndv\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.804997 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-config-data\") pod \"nova-api-0\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " pod="openstack/nova-api-0" Jan 22 10:58:48 crc kubenswrapper[4975]: I0122 10:58:48.934826 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:58:49 crc kubenswrapper[4975]: I0122 10:58:49.007056 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:58:49 crc kubenswrapper[4975]: W0122 10:58:49.015154 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7e45d19_2f50_4b5b_96b5_d5ac9601938d.slice/crio-9fb8e2f69b7f99d905613e1363b04821ef0c65b790ffc75fa72853646402f63e WatchSource:0}: Error finding container 9fb8e2f69b7f99d905613e1363b04821ef0c65b790ffc75fa72853646402f63e: Status 404 returned error can't find the container with id 9fb8e2f69b7f99d905613e1363b04821ef0c65b790ffc75fa72853646402f63e Jan 22 10:58:49 crc kubenswrapper[4975]: I0122 10:58:49.093931 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7e45d19-2f50-4b5b-96b5-d5ac9601938d","Type":"ContainerStarted","Data":"9fb8e2f69b7f99d905613e1363b04821ef0c65b790ffc75fa72853646402f63e"} Jan 22 10:58:49 crc kubenswrapper[4975]: I0122 10:58:49.223642 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:58:49 crc kubenswrapper[4975]: W0122 10:58:49.226715 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3db2ac85_1801_4446_8617_e9d16bf48742.slice/crio-1b857f0e377fb7d5ee2d92f53d7e8179532e330c6446de3c257504da075e8bdd WatchSource:0}: Error finding container 1b857f0e377fb7d5ee2d92f53d7e8179532e330c6446de3c257504da075e8bdd: Status 404 returned error can't find the container with id 1b857f0e377fb7d5ee2d92f53d7e8179532e330c6446de3c257504da075e8bdd Jan 22 10:58:49 crc kubenswrapper[4975]: I0122 10:58:49.492981 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:58:49 crc kubenswrapper[4975]: I0122 10:58:49.493249 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:58:49 crc kubenswrapper[4975]: I0122 10:58:49.768068 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a7a50a0-2077-48c5-8a6d-e0aaa4c43741" path="/var/lib/kubelet/pods/4a7a50a0-2077-48c5-8a6d-e0aaa4c43741/volumes" Jan 22 10:58:49 crc kubenswrapper[4975]: I0122 
10:58:49.768929 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f20dd1ec-3ab9-4409-bd87-f1ce110f7da9" path="/var/lib/kubelet/pods/f20dd1ec-3ab9-4409-bd87-f1ce110f7da9/volumes" Jan 22 10:58:50 crc kubenswrapper[4975]: I0122 10:58:50.113209 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7e45d19-2f50-4b5b-96b5-d5ac9601938d","Type":"ContainerStarted","Data":"3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4"} Jan 22 10:58:50 crc kubenswrapper[4975]: I0122 10:58:50.115653 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3db2ac85-1801-4446-8617-e9d16bf48742","Type":"ContainerStarted","Data":"ba8258b01f99bcfc8a74a67ced8147534b545b8d24f19698d7fae85399d4439c"} Jan 22 10:58:50 crc kubenswrapper[4975]: I0122 10:58:50.115684 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3db2ac85-1801-4446-8617-e9d16bf48742","Type":"ContainerStarted","Data":"32e2f49b13cf0c79486b66f8142a0cdf6912c04fb21b6cfd2f04df5aa59d744f"} Jan 22 10:58:50 crc kubenswrapper[4975]: I0122 10:58:50.115694 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3db2ac85-1801-4446-8617-e9d16bf48742","Type":"ContainerStarted","Data":"1b857f0e377fb7d5ee2d92f53d7e8179532e330c6446de3c257504da075e8bdd"} Jan 22 10:58:50 crc kubenswrapper[4975]: I0122 10:58:50.148061 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.148038325 podStartE2EDuration="2.148038325s" podCreationTimestamp="2026-01-22 10:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:58:50.146837902 +0000 UTC m=+1322.804874012" watchObservedRunningTime="2026-01-22 10:58:50.148038325 +0000 UTC m=+1322.806074445" Jan 22 10:58:50 crc kubenswrapper[4975]: I0122 10:58:50.164937 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.164917734 podStartE2EDuration="2.164917734s" podCreationTimestamp="2026-01-22 10:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:58:50.162968601 +0000 UTC m=+1322.821004711" watchObservedRunningTime="2026-01-22 10:58:50.164917734 +0000 UTC m=+1322.822953844" Jan 22 10:58:50 crc kubenswrapper[4975]: I0122 10:58:50.252590 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 10:58:53 crc kubenswrapper[4975]: I0122 10:58:53.503070 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 10:58:54 crc kubenswrapper[4975]: I0122 10:58:54.447159 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 22 10:58:54 crc kubenswrapper[4975]: I0122 10:58:54.492748 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 10:58:54 crc kubenswrapper[4975]: I0122 10:58:54.492840 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 10:58:55 crc kubenswrapper[4975]: I0122 10:58:55.508582 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:58:55 crc kubenswrapper[4975]: I0122 10:58:55.508568 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:58:57 crc kubenswrapper[4975]: I0122 10:58:57.436526 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:58:57 crc kubenswrapper[4975]: I0122 10:58:57.437044 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:58:58 crc kubenswrapper[4975]: I0122 10:58:58.502970 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 10:58:58 crc kubenswrapper[4975]: I0122 10:58:58.535373 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 10:58:58 crc kubenswrapper[4975]: I0122 10:58:58.935464 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:58:58 crc kubenswrapper[4975]: I0122 10:58:58.935507 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:58:59 crc kubenswrapper[4975]: I0122 10:58:59.240778 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 10:58:59 crc kubenswrapper[4975]: I0122 10:58:59.977681 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:58:59 crc kubenswrapper[4975]: I0122 10:58:59.977703 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:59:04 crc kubenswrapper[4975]: I0122 10:59:04.504559 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 10:59:04 crc kubenswrapper[4975]: I0122 10:59:04.505701 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 10:59:04 crc kubenswrapper[4975]: I0122 10:59:04.511688 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 10:59:05 crc kubenswrapper[4975]: I0122 10:59:05.272520 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Jan 22 10:59:07 crc kubenswrapper[4975]: I0122 10:59:07.281397 4975 generic.go:334] "Generic (PLEG): container finished" podID="6f967745-7b78-4a9d-be66-b94128b04cf2" containerID="14a0b0c0595a809a63905ee9879a4981e5d51c008c7ddb6dfaf5c98179b7cf55" exitCode=137 Jan 22 10:59:07 crc kubenswrapper[4975]: I0122 10:59:07.281504 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6f967745-7b78-4a9d-be66-b94128b04cf2","Type":"ContainerDied","Data":"14a0b0c0595a809a63905ee9879a4981e5d51c008c7ddb6dfaf5c98179b7cf55"} Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.209870 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.292043 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6f967745-7b78-4a9d-be66-b94128b04cf2","Type":"ContainerDied","Data":"c318a3bd21a866c0de49955e2dc0d4bcdaa75257f5d163b763b6967c8938651d"} Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.292095 4975 scope.go:117] "RemoveContainer" containerID="14a0b0c0595a809a63905ee9879a4981e5d51c008c7ddb6dfaf5c98179b7cf55" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.292223 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.336453 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-config-data\") pod \"6f967745-7b78-4a9d-be66-b94128b04cf2\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.336873 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgjwl\" (UniqueName: \"kubernetes.io/projected/6f967745-7b78-4a9d-be66-b94128b04cf2-kube-api-access-zgjwl\") pod \"6f967745-7b78-4a9d-be66-b94128b04cf2\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.336996 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-combined-ca-bundle\") pod \"6f967745-7b78-4a9d-be66-b94128b04cf2\" (UID: \"6f967745-7b78-4a9d-be66-b94128b04cf2\") " Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.341527 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f967745-7b78-4a9d-be66-b94128b04cf2-kube-api-access-zgjwl" (OuterVolumeSpecName: "kube-api-access-zgjwl") pod "6f967745-7b78-4a9d-be66-b94128b04cf2" (UID: "6f967745-7b78-4a9d-be66-b94128b04cf2"). InnerVolumeSpecName "kube-api-access-zgjwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.361736 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f967745-7b78-4a9d-be66-b94128b04cf2" (UID: "6f967745-7b78-4a9d-be66-b94128b04cf2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.382808 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-config-data" (OuterVolumeSpecName: "config-data") pod "6f967745-7b78-4a9d-be66-b94128b04cf2" (UID: "6f967745-7b78-4a9d-be66-b94128b04cf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.439386 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.439413 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f967745-7b78-4a9d-be66-b94128b04cf2-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.439425 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgjwl\" (UniqueName: \"kubernetes.io/projected/6f967745-7b78-4a9d-be66-b94128b04cf2-kube-api-access-zgjwl\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.647447 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.672905 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.684166 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:59:08 crc kubenswrapper[4975]: E0122 10:59:08.684587 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f967745-7b78-4a9d-be66-b94128b04cf2" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.684605 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f967745-7b78-4a9d-be66-b94128b04cf2" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.684804 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f967745-7b78-4a9d-be66-b94128b04cf2" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.689049 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.696006 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.696691 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.696994 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.699375 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.856482 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.856581 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.856622 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.856683 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hz92\" (UniqueName: \"kubernetes.io/projected/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-kube-api-access-7hz92\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.857098 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.938986 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.958481 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.958523 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.959443 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.959663 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hz92\" (UniqueName: \"kubernetes.io/projected/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-kube-api-access-7hz92\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.959855 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.960004 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.960131 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.967371 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.967493 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.968311 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.970652 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.972164 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:08 crc kubenswrapper[4975]: I0122 10:59:08.979117 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hz92\" (UniqueName: 
\"kubernetes.io/projected/1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec-kube-api-access-7hz92\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.018892 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.313977 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.317811 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.492384 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-bp68q"] Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.494849 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.521657 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-bp68q"] Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.564013 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.672900 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tltg5\" (UniqueName: \"kubernetes.io/projected/609a3918-aa87-49ed-818b-bcba858623a2-kube-api-access-tltg5\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.673023 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.673077 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-config\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.673109 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.673206 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.673249 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.766612 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f967745-7b78-4a9d-be66-b94128b04cf2" path="/var/lib/kubelet/pods/6f967745-7b78-4a9d-be66-b94128b04cf2/volumes" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.775745 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-config\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.775794 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.775815 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.775832 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.775888 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tltg5\" (UniqueName: \"kubernetes.io/projected/609a3918-aa87-49ed-818b-bcba858623a2-kube-api-access-tltg5\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.775980 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.776692 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.776756 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: 
\"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.777524 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-config\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.777752 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.777971 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.793674 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tltg5\" (UniqueName: \"kubernetes.io/projected/609a3918-aa87-49ed-818b-bcba858623a2-kube-api-access-tltg5\") pod \"dnsmasq-dns-89c5cd4d5-bp68q\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:09 crc kubenswrapper[4975]: I0122 10:59:09.817286 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:10 crc kubenswrapper[4975]: I0122 10:59:10.262428 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-bp68q"] Jan 22 10:59:10 crc kubenswrapper[4975]: W0122 10:59:10.265218 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod609a3918_aa87_49ed_818b_bcba858623a2.slice/crio-4f03837ab15b7fcede43f28bf5cef87caba2df8c722b7d6970349ec7db368a5c WatchSource:0}: Error finding container 4f03837ab15b7fcede43f28bf5cef87caba2df8c722b7d6970349ec7db368a5c: Status 404 returned error can't find the container with id 4f03837ab15b7fcede43f28bf5cef87caba2df8c722b7d6970349ec7db368a5c Jan 22 10:59:10 crc kubenswrapper[4975]: I0122 10:59:10.322589 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec","Type":"ContainerStarted","Data":"3398d7bf571c56efa28720611addc45424cfcf92c33b8504b96ea2fea5e36f1a"} Jan 22 10:59:10 crc kubenswrapper[4975]: I0122 10:59:10.324059 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" event={"ID":"609a3918-aa87-49ed-818b-bcba858623a2","Type":"ContainerStarted","Data":"4f03837ab15b7fcede43f28bf5cef87caba2df8c722b7d6970349ec7db368a5c"} Jan 22 10:59:11 crc kubenswrapper[4975]: I0122 10:59:11.791129 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:12 crc kubenswrapper[4975]: I0122 10:59:12.338937 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-log" 
containerID="cri-o://32e2f49b13cf0c79486b66f8142a0cdf6912c04fb21b6cfd2f04df5aa59d744f" gracePeriod=30 Jan 22 10:59:12 crc kubenswrapper[4975]: I0122 10:59:12.339075 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-api" containerID="cri-o://ba8258b01f99bcfc8a74a67ced8147534b545b8d24f19698d7fae85399d4439c" gracePeriod=30 Jan 22 10:59:13 crc kubenswrapper[4975]: I0122 10:59:13.347290 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec","Type":"ContainerStarted","Data":"d54d72ea0275dea5b1cad6f071ccb2eead232e02f8e17ad9fd3b4791baa9b9c7"} Jan 22 10:59:13 crc kubenswrapper[4975]: I0122 10:59:13.348782 4975 generic.go:334] "Generic (PLEG): container finished" podID="609a3918-aa87-49ed-818b-bcba858623a2" containerID="69e4f37b817c3814aa8e46c00065b5e6aa0f35dd8b3db880c4cfe0b2b3586440" exitCode=0 Jan 22 10:59:13 crc kubenswrapper[4975]: I0122 10:59:13.348862 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" event={"ID":"609a3918-aa87-49ed-818b-bcba858623a2","Type":"ContainerDied","Data":"69e4f37b817c3814aa8e46c00065b5e6aa0f35dd8b3db880c4cfe0b2b3586440"} Jan 22 10:59:13 crc kubenswrapper[4975]: I0122 10:59:13.352020 4975 generic.go:334] "Generic (PLEG): container finished" podID="3db2ac85-1801-4446-8617-e9d16bf48742" containerID="32e2f49b13cf0c79486b66f8142a0cdf6912c04fb21b6cfd2f04df5aa59d744f" exitCode=143 Jan 22 10:59:13 crc kubenswrapper[4975]: I0122 10:59:13.352050 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3db2ac85-1801-4446-8617-e9d16bf48742","Type":"ContainerDied","Data":"32e2f49b13cf0c79486b66f8142a0cdf6912c04fb21b6cfd2f04df5aa59d744f"} Jan 22 10:59:13 crc kubenswrapper[4975]: I0122 10:59:13.371580 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.37155922 podStartE2EDuration="5.37155922s" podCreationTimestamp="2026-01-22 10:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:59:13.364831923 +0000 UTC m=+1346.022868033" watchObservedRunningTime="2026-01-22 10:59:13.37155922 +0000 UTC m=+1346.029595330" Jan 22 10:59:13 crc kubenswrapper[4975]: I0122 10:59:13.626070 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 10:59:14 crc kubenswrapper[4975]: I0122 10:59:14.019696 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:14 crc kubenswrapper[4975]: I0122 10:59:14.363916 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" event={"ID":"609a3918-aa87-49ed-818b-bcba858623a2","Type":"ContainerStarted","Data":"ff6e45dfd32a446678f9bac33e1c9b7153a7ade6fe4fd2c771fc91296344d7b5"} Jan 22 10:59:14 crc kubenswrapper[4975]: I0122 10:59:14.387042 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" podStartSLOduration=5.387023969 podStartE2EDuration="5.387023969s" podCreationTimestamp="2026-01-22 10:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:59:14.380293401 +0000 UTC 
m=+1347.038329521" watchObservedRunningTime="2026-01-22 10:59:14.387023969 +0000 UTC m=+1347.045060079" Jan 22 10:59:14 crc kubenswrapper[4975]: I0122 10:59:14.817832 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:15 crc kubenswrapper[4975]: I0122 10:59:15.288141 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:59:15 crc kubenswrapper[4975]: I0122 10:59:15.288424 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="ceilometer-central-agent" containerID="cri-o://5f920509d608cadcbe4726fdc850f379eb721d1f83b98ff61b1b2634412e44ca" gracePeriod=30 Jan 22 10:59:15 crc kubenswrapper[4975]: I0122 10:59:15.288455 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="proxy-httpd" containerID="cri-o://63978e89a5c0c4cefac27eccd3eb33be3363f4c281ff37bb95e6d34b444997d0" gracePeriod=30 Jan 22 10:59:15 crc kubenswrapper[4975]: I0122 10:59:15.288541 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="ceilometer-notification-agent" containerID="cri-o://2ea48f7fb99a6e6288369ef9d1818d9cd8492e4864975bbdaee103aaf373bf51" gracePeriod=30 Jan 22 10:59:15 crc kubenswrapper[4975]: I0122 10:59:15.288559 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="sg-core" containerID="cri-o://c374f2f5a2e0f81ff52b7822a2002082f23da304a90cb151c0c986aa13c87b3d" gracePeriod=30 Jan 22 10:59:16 crc kubenswrapper[4975]: I0122 10:59:16.389857 4975 generic.go:334] "Generic (PLEG): container finished" podID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerID="63978e89a5c0c4cefac27eccd3eb33be3363f4c281ff37bb95e6d34b444997d0" exitCode=0 Jan 22 10:59:16 crc kubenswrapper[4975]: I0122 10:59:16.390153 4975 generic.go:334] "Generic (PLEG): container finished" podID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerID="c374f2f5a2e0f81ff52b7822a2002082f23da304a90cb151c0c986aa13c87b3d" exitCode=2 Jan 22 10:59:16 crc kubenswrapper[4975]: I0122 10:59:16.390168 4975 generic.go:334] "Generic (PLEG): container finished" podID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerID="5f920509d608cadcbe4726fdc850f379eb721d1f83b98ff61b1b2634412e44ca" exitCode=0 Jan 22 10:59:16 crc kubenswrapper[4975]: I0122 10:59:16.390049 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerDied","Data":"63978e89a5c0c4cefac27eccd3eb33be3363f4c281ff37bb95e6d34b444997d0"} Jan 22 10:59:16 crc kubenswrapper[4975]: I0122 10:59:16.390235 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerDied","Data":"c374f2f5a2e0f81ff52b7822a2002082f23da304a90cb151c0c986aa13c87b3d"} Jan 22 10:59:16 crc kubenswrapper[4975]: I0122 10:59:16.390253 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerDied","Data":"5f920509d608cadcbe4726fdc850f379eb721d1f83b98ff61b1b2634412e44ca"} Jan 22 10:59:16 crc kubenswrapper[4975]: I0122 10:59:16.393319 4975 
generic.go:334] "Generic (PLEG): container finished" podID="3db2ac85-1801-4446-8617-e9d16bf48742" containerID="ba8258b01f99bcfc8a74a67ced8147534b545b8d24f19698d7fae85399d4439c" exitCode=0 Jan 22 10:59:16 crc kubenswrapper[4975]: I0122 10:59:16.394403 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3db2ac85-1801-4446-8617-e9d16bf48742","Type":"ContainerDied","Data":"ba8258b01f99bcfc8a74a67ced8147534b545b8d24f19698d7fae85399d4439c"} Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.403570 4975 generic.go:334] "Generic (PLEG): container finished" podID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerID="2ea48f7fb99a6e6288369ef9d1818d9cd8492e4864975bbdaee103aaf373bf51" exitCode=0 Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.403768 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerDied","Data":"2ea48f7fb99a6e6288369ef9d1818d9cd8492e4864975bbdaee103aaf373bf51"} Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.792881 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.934893 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2ac85-1801-4446-8617-e9d16bf48742-logs\") pod \"3db2ac85-1801-4446-8617-e9d16bf48742\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.935455 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-config-data\") pod \"3db2ac85-1801-4446-8617-e9d16bf48742\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.935508 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-combined-ca-bundle\") pod \"3db2ac85-1801-4446-8617-e9d16bf48742\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.935566 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khndv\" (UniqueName: \"kubernetes.io/projected/3db2ac85-1801-4446-8617-e9d16bf48742-kube-api-access-khndv\") pod \"3db2ac85-1801-4446-8617-e9d16bf48742\" (UID: \"3db2ac85-1801-4446-8617-e9d16bf48742\") " Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.935687 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db2ac85-1801-4446-8617-e9d16bf48742-logs" (OuterVolumeSpecName: "logs") pod "3db2ac85-1801-4446-8617-e9d16bf48742" (UID: "3db2ac85-1801-4446-8617-e9d16bf48742"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.935953 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db2ac85-1801-4446-8617-e9d16bf48742-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.941420 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db2ac85-1801-4446-8617-e9d16bf48742-kube-api-access-khndv" (OuterVolumeSpecName: "kube-api-access-khndv") pod "3db2ac85-1801-4446-8617-e9d16bf48742" (UID: "3db2ac85-1801-4446-8617-e9d16bf48742"). InnerVolumeSpecName "kube-api-access-khndv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.974541 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-config-data" (OuterVolumeSpecName: "config-data") pod "3db2ac85-1801-4446-8617-e9d16bf48742" (UID: "3db2ac85-1801-4446-8617-e9d16bf48742"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:17 crc kubenswrapper[4975]: I0122 10:59:17.974881 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3db2ac85-1801-4446-8617-e9d16bf48742" (UID: "3db2ac85-1801-4446-8617-e9d16bf48742"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.043320 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.043387 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db2ac85-1801-4446-8617-e9d16bf48742-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.043405 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khndv\" (UniqueName: \"kubernetes.io/projected/3db2ac85-1801-4446-8617-e9d16bf48742-kube-api-access-khndv\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.047912 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.145144 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-run-httpd\") pod \"34de56a7-2ba9-4841-9489-2c0e89be46dd\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.145683 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-config-data\") pod \"34de56a7-2ba9-4841-9489-2c0e89be46dd\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.145629 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "34de56a7-2ba9-4841-9489-2c0e89be46dd" (UID: "34de56a7-2ba9-4841-9489-2c0e89be46dd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.145763 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-combined-ca-bundle\") pod \"34de56a7-2ba9-4841-9489-2c0e89be46dd\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.146134 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-sg-core-conf-yaml\") pod \"34de56a7-2ba9-4841-9489-2c0e89be46dd\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.146174 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-scripts\") pod \"34de56a7-2ba9-4841-9489-2c0e89be46dd\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.146203 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp8xz\" (UniqueName: \"kubernetes.io/projected/34de56a7-2ba9-4841-9489-2c0e89be46dd-kube-api-access-wp8xz\") pod \"34de56a7-2ba9-4841-9489-2c0e89be46dd\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.146238 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-log-httpd\") pod \"34de56a7-2ba9-4841-9489-2c0e89be46dd\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.146359 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-ceilometer-tls-certs\") pod \"34de56a7-2ba9-4841-9489-2c0e89be46dd\" (UID: \"34de56a7-2ba9-4841-9489-2c0e89be46dd\") " Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.146800 4975 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-run-httpd\") on node \"crc\" DevicePath 
\"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.149127 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "34de56a7-2ba9-4841-9489-2c0e89be46dd" (UID: "34de56a7-2ba9-4841-9489-2c0e89be46dd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.149900 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-scripts" (OuterVolumeSpecName: "scripts") pod "34de56a7-2ba9-4841-9489-2c0e89be46dd" (UID: "34de56a7-2ba9-4841-9489-2c0e89be46dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.151306 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34de56a7-2ba9-4841-9489-2c0e89be46dd-kube-api-access-wp8xz" (OuterVolumeSpecName: "kube-api-access-wp8xz") pod "34de56a7-2ba9-4841-9489-2c0e89be46dd" (UID: "34de56a7-2ba9-4841-9489-2c0e89be46dd"). InnerVolumeSpecName "kube-api-access-wp8xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.176239 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "34de56a7-2ba9-4841-9489-2c0e89be46dd" (UID: "34de56a7-2ba9-4841-9489-2c0e89be46dd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.205067 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "34de56a7-2ba9-4841-9489-2c0e89be46dd" (UID: "34de56a7-2ba9-4841-9489-2c0e89be46dd"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.249673 4975 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.249715 4975 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.249729 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.249741 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp8xz\" (UniqueName: \"kubernetes.io/projected/34de56a7-2ba9-4841-9489-2c0e89be46dd-kube-api-access-wp8xz\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.249754 4975 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34de56a7-2ba9-4841-9489-2c0e89be46dd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.263529 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34de56a7-2ba9-4841-9489-2c0e89be46dd" (UID: "34de56a7-2ba9-4841-9489-2c0e89be46dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.273129 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-config-data" (OuterVolumeSpecName: "config-data") pod "34de56a7-2ba9-4841-9489-2c0e89be46dd" (UID: "34de56a7-2ba9-4841-9489-2c0e89be46dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.351697 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.351742 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34de56a7-2ba9-4841-9489-2c0e89be46dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.417025 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34de56a7-2ba9-4841-9489-2c0e89be46dd","Type":"ContainerDied","Data":"1055b2f92ba80bfaadf5c1f684d3c30aa520fadcdf83571ca314d967a7e63c90"} Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.417104 4975 scope.go:117] "RemoveContainer" containerID="63978e89a5c0c4cefac27eccd3eb33be3363f4c281ff37bb95e6d34b444997d0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.417038 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.421049 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3db2ac85-1801-4446-8617-e9d16bf48742","Type":"ContainerDied","Data":"1b857f0e377fb7d5ee2d92f53d7e8179532e330c6446de3c257504da075e8bdd"} Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.421199 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.442294 4975 scope.go:117] "RemoveContainer" containerID="c374f2f5a2e0f81ff52b7822a2002082f23da304a90cb151c0c986aa13c87b3d" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.475644 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.481409 4975 scope.go:117] "RemoveContainer" containerID="2ea48f7fb99a6e6288369ef9d1818d9cd8492e4864975bbdaee103aaf373bf51" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.493743 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.505345 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.513102 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.527515 4975 scope.go:117] "RemoveContainer" containerID="5f920509d608cadcbe4726fdc850f379eb721d1f83b98ff61b1b2634412e44ca" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.527609 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:18 crc kubenswrapper[4975]: E0122 10:59:18.527952 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-api" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.527966 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-api" Jan 22 10:59:18 crc kubenswrapper[4975]: E0122 10:59:18.527982 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="ceilometer-notification-agent" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.527988 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="ceilometer-notification-agent" Jan 22 10:59:18 crc kubenswrapper[4975]: E0122 10:59:18.527998 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="ceilometer-central-agent" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528004 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="ceilometer-central-agent" Jan 22 10:59:18 crc kubenswrapper[4975]: E0122 10:59:18.528021 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="proxy-httpd" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528026 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="proxy-httpd" Jan 22 10:59:18 crc kubenswrapper[4975]: E0122 10:59:18.528049 4975 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="sg-core" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528054 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="sg-core" Jan 22 10:59:18 crc kubenswrapper[4975]: E0122 10:59:18.528065 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-log" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528070 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-log" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528219 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="proxy-httpd" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528238 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="ceilometer-notification-agent" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528247 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="ceilometer-central-agent" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528257 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-log" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528270 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" containerName="sg-core" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.528279 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" containerName="nova-api-api" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.529100 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.555952 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.558159 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.585927 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.586515 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.586436 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.586763 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.586883 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.586986 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.601942 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.614066 4975 scope.go:117] "RemoveContainer" containerID="ba8258b01f99bcfc8a74a67ced8147534b545b8d24f19698d7fae85399d4439c" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.618173 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.644075 4975 scope.go:117] "RemoveContainer" containerID="32e2f49b13cf0c79486b66f8142a0cdf6912c04fb21b6cfd2f04df5aa59d744f" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.674634 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6xp4\" (UniqueName: \"kubernetes.io/projected/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-kube-api-access-n6xp4\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.674795 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jjl5\" (UniqueName: \"kubernetes.io/projected/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-kube-api-access-4jjl5\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.674840 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-scripts\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.674903 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-config-data\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.674937 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-config-data\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " 
pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.674953 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.675006 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.675024 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.675038 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-log-httpd\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.675063 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.675084 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-public-tls-certs\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.675128 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-logs\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.675158 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.675176 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-run-httpd\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776154 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776193 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776209 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-log-httpd\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776243 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776262 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-public-tls-certs\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776309 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-logs\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776342 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776356 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-run-httpd\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776374 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6xp4\" (UniqueName: \"kubernetes.io/projected/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-kube-api-access-n6xp4\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776395 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jjl5\" (UniqueName: \"kubernetes.io/projected/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-kube-api-access-4jjl5\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776411 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-scripts\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776441 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-config-data\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776466 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-config-data\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.776483 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.778076 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-log-httpd\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.778608 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-logs\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.779628 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-run-httpd\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.782507 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.782609 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.783639 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-public-tls-certs\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.784292 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-config-data\") pod 
\"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.784415 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.786986 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.787942 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-config-data\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.788206 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-scripts\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.801958 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.817638 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6xp4\" (UniqueName: \"kubernetes.io/projected/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-kube-api-access-n6xp4\") pod \"nova-api-0\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.822491 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jjl5\" (UniqueName: \"kubernetes.io/projected/e8833648-ddd8-4847-b2b0-25d3f8ac08c3-kube-api-access-4jjl5\") pod \"ceilometer-0\" (UID: \"e8833648-ddd8-4847-b2b0-25d3f8ac08c3\") " pod="openstack/ceilometer-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.891151 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:59:18 crc kubenswrapper[4975]: I0122 10:59:18.907395 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.020058 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.053896 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.336447 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.440591 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.444519 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1","Type":"ContainerStarted","Data":"e65c6aaba16acfd4948cebb681c2f585923b8afda6a53f02fe4370fcc29f0831"} Jan 22 10:59:19 crc kubenswrapper[4975]: W0122 10:59:19.455532 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8833648_ddd8_4847_b2b0_25d3f8ac08c3.slice/crio-8a2817ed5d15b05262e4b9682d672579c82ea8988252af6cb49365abd197eb8f WatchSource:0}: Error finding container 8a2817ed5d15b05262e4b9682d672579c82ea8988252af6cb49365abd197eb8f: Status 404 returned error can't find the container with id 8a2817ed5d15b05262e4b9682d672579c82ea8988252af6cb49365abd197eb8f Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.457085 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.490574 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.768204 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34de56a7-2ba9-4841-9489-2c0e89be46dd" path="/var/lib/kubelet/pods/34de56a7-2ba9-4841-9489-2c0e89be46dd/volumes" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.769416 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db2ac85-1801-4446-8617-e9d16bf48742" path="/var/lib/kubelet/pods/3db2ac85-1801-4446-8617-e9d16bf48742/volumes" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.770078 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hj9wd"] Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.771207 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hj9wd"] Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.771288 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hj9wd" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.773359 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.775085 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.819511 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.886649 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-qztph"] Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.886879 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-qztph" podUID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" containerName="dnsmasq-dns" containerID="cri-o://b3b471e738dd8b620dd03840b6ceb0773c4d3059578a2b75c6359db70197ba79" gracePeriod=10 Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.901269 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.901337 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pczq\" (UniqueName: \"kubernetes.io/projected/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-kube-api-access-5pczq\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.901381 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-scripts\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd" Jan 22 10:59:19 crc kubenswrapper[4975]: I0122 10:59:19.901428 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-config-data\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd" Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.003261 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd" Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.003320 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pczq\" (UniqueName: \"kubernetes.io/projected/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-kube-api-access-5pczq\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd" 
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.003383 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-scripts\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd"
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.003423 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-config-data\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd"
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.010528 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-scripts\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd"
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.011185 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd"
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.017543 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-config-data\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd"
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.020861 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pczq\" (UniqueName: \"kubernetes.io/projected/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-kube-api-access-5pczq\") pod \"nova-cell1-cell-mapping-hj9wd\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " pod="openstack/nova-cell1-cell-mapping-hj9wd"
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.094901 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hj9wd"
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.496576 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8833648-ddd8-4847-b2b0-25d3f8ac08c3","Type":"ContainerStarted","Data":"8a2817ed5d15b05262e4b9682d672579c82ea8988252af6cb49365abd197eb8f"}
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.525012 4975 generic.go:334] "Generic (PLEG): container finished" podID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" containerID="b3b471e738dd8b620dd03840b6ceb0773c4d3059578a2b75c6359db70197ba79" exitCode=0
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.525105 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-qztph" event={"ID":"c4f4b9fe-d65b-4778-a38e-704314e4cf34","Type":"ContainerDied","Data":"b3b471e738dd8b620dd03840b6ceb0773c4d3059578a2b75c6359db70197ba79"}
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.539938 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1","Type":"ContainerStarted","Data":"e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512"}
Jan 22 10:59:20 crc kubenswrapper[4975]: I0122 10:59:20.578009 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hj9wd"]
Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.005200 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-qztph"
Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.135842 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-config\") pod \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") "
Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.135936 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d74jk\" (UniqueName: \"kubernetes.io/projected/c4f4b9fe-d65b-4778-a38e-704314e4cf34-kube-api-access-d74jk\") pod \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") "
Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.135985 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-swift-storage-0\") pod \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") "
Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.136106 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-sb\") pod \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") "
Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.136144 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-svc\") pod \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") "
Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.136165 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-nb\") pod \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") "
\"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-nb\") pod \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\" (UID: \"c4f4b9fe-d65b-4778-a38e-704314e4cf34\") " Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.141635 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f4b9fe-d65b-4778-a38e-704314e4cf34-kube-api-access-d74jk" (OuterVolumeSpecName: "kube-api-access-d74jk") pod "c4f4b9fe-d65b-4778-a38e-704314e4cf34" (UID: "c4f4b9fe-d65b-4778-a38e-704314e4cf34"). InnerVolumeSpecName "kube-api-access-d74jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.184185 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4f4b9fe-d65b-4778-a38e-704314e4cf34" (UID: "c4f4b9fe-d65b-4778-a38e-704314e4cf34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.186845 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-config" (OuterVolumeSpecName: "config") pod "c4f4b9fe-d65b-4778-a38e-704314e4cf34" (UID: "c4f4b9fe-d65b-4778-a38e-704314e4cf34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.188870 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c4f4b9fe-d65b-4778-a38e-704314e4cf34" (UID: "c4f4b9fe-d65b-4778-a38e-704314e4cf34"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.198154 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4f4b9fe-d65b-4778-a38e-704314e4cf34" (UID: "c4f4b9fe-d65b-4778-a38e-704314e4cf34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.209934 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4f4b9fe-d65b-4778-a38e-704314e4cf34" (UID: "c4f4b9fe-d65b-4778-a38e-704314e4cf34"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.238835 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d74jk\" (UniqueName: \"kubernetes.io/projected/c4f4b9fe-d65b-4778-a38e-704314e4cf34-kube-api-access-d74jk\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.238870 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.238921 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.238931 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.238940 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.238951 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f4b9fe-d65b-4778-a38e-704314e4cf34-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.548517 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8833648-ddd8-4847-b2b0-25d3f8ac08c3","Type":"ContainerStarted","Data":"23f601036120a585cbf6caf73acb8a1e1eaaba2a62b2c3160e0d056b564af310"} Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.550204 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-qztph" event={"ID":"c4f4b9fe-d65b-4778-a38e-704314e4cf34","Type":"ContainerDied","Data":"a2dbeb93507eb93d8da759d1e3e592609b62231694680194da5239b3cce31cf9"} Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.550238 4975 scope.go:117] "RemoveContainer" containerID="b3b471e738dd8b620dd03840b6ceb0773c4d3059578a2b75c6359db70197ba79" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.550259 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-qztph" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.552571 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hj9wd" event={"ID":"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4","Type":"ContainerStarted","Data":"573c450cb51344bdd22a1e3ab36b330a3420972d6c26172b9cdcae5f0a10a5c7"} Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.552630 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hj9wd" event={"ID":"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4","Type":"ContainerStarted","Data":"440a77b4316ea50ae88a491b55e481446243f1726ed41d68c884b2a22c9d9221"} Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.561646 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1","Type":"ContainerStarted","Data":"db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28"} Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.583389 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hj9wd" podStartSLOduration=2.58337145 podStartE2EDuration="2.58337145s" podCreationTimestamp="2026-01-22 10:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:59:21.569580327 +0000 UTC m=+1354.227616447" watchObservedRunningTime="2026-01-22 10:59:21.58337145 +0000 UTC m=+1354.241407580" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.602029 4975 scope.go:117] "RemoveContainer" containerID="49ce5550d00dd33ce67de4e89597335c9e2175203219c9e7cc4480c91c87af89" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.625020 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-qztph"] Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.642402 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-qztph"] Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.648410 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.648391497 podStartE2EDuration="3.648391497s" podCreationTimestamp="2026-01-22 10:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:59:21.609620409 +0000 UTC m=+1354.267656529" watchObservedRunningTime="2026-01-22 10:59:21.648391497 +0000 UTC m=+1354.306427607" Jan 22 10:59:21 crc kubenswrapper[4975]: I0122 10:59:21.763696 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" path="/var/lib/kubelet/pods/c4f4b9fe-d65b-4778-a38e-704314e4cf34/volumes" Jan 22 10:59:23 crc kubenswrapper[4975]: I0122 10:59:23.590153 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8833648-ddd8-4847-b2b0-25d3f8ac08c3","Type":"ContainerStarted","Data":"f4affb545e7ac091d1f9343649c8d6201db31577871da0591c3303a5a26658a5"} Jan 22 10:59:23 crc kubenswrapper[4975]: I0122 10:59:23.590575 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8833648-ddd8-4847-b2b0-25d3f8ac08c3","Type":"ContainerStarted","Data":"fc628f98a385ac25d2cae3faa9e43d1e3d0f5637d9de6c8a5603eedd5f5f2f5c"} Jan 22 10:59:25 crc kubenswrapper[4975]: I0122 10:59:25.752615 4975 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-qztph" podUID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.190:5353: i/o timeout" Jan 22 10:59:26 crc kubenswrapper[4975]: I0122 10:59:26.626888 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8833648-ddd8-4847-b2b0-25d3f8ac08c3","Type":"ContainerStarted","Data":"7c02d3a5fa3254dcb0181e750dbff36ee37251d7025ace8ea205e0df3fb15ff0"} Jan 22 10:59:26 crc kubenswrapper[4975]: I0122 10:59:26.627362 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:59:26 crc kubenswrapper[4975]: I0122 10:59:26.631220 4975 generic.go:334] "Generic (PLEG): container finished" podID="9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4" containerID="573c450cb51344bdd22a1e3ab36b330a3420972d6c26172b9cdcae5f0a10a5c7" exitCode=0 Jan 22 10:59:26 crc kubenswrapper[4975]: I0122 10:59:26.631306 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hj9wd" event={"ID":"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4","Type":"ContainerDied","Data":"573c450cb51344bdd22a1e3ab36b330a3420972d6c26172b9cdcae5f0a10a5c7"} Jan 22 10:59:26 crc kubenswrapper[4975]: I0122 10:59:26.674589 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9883746159999998 podStartE2EDuration="8.67456724s" podCreationTimestamp="2026-01-22 10:59:18 +0000 UTC" firstStartedPulling="2026-01-22 10:59:19.456880767 +0000 UTC m=+1352.114916877" lastFinishedPulling="2026-01-22 10:59:25.143073391 +0000 UTC m=+1357.801109501" observedRunningTime="2026-01-22 10:59:26.658178865 +0000 UTC m=+1359.316214975" watchObservedRunningTime="2026-01-22 10:59:26.67456724 +0000 UTC m=+1359.332603350" Jan 22 10:59:27 crc kubenswrapper[4975]: I0122 10:59:27.436155 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:59:27 crc kubenswrapper[4975]: I0122 10:59:27.436241 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.070709 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hj9wd" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.166902 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-config-data\") pod \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.167224 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-combined-ca-bundle\") pod \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.167469 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pczq\" (UniqueName: \"kubernetes.io/projected/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-kube-api-access-5pczq\") pod \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.167530 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-scripts\") pod \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\" (UID: \"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4\") " Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.172801 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-scripts" (OuterVolumeSpecName: "scripts") pod "9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4" (UID: "9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.186983 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-kube-api-access-5pczq" (OuterVolumeSpecName: "kube-api-access-5pczq") pod "9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4" (UID: "9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4"). InnerVolumeSpecName "kube-api-access-5pczq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.204495 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-config-data" (OuterVolumeSpecName: "config-data") pod "9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4" (UID: "9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.216316 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4" (UID: "9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.269974 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.270007 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pczq\" (UniqueName: \"kubernetes.io/projected/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-kube-api-access-5pczq\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.270020 4975 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.270032 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.659909 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hj9wd" event={"ID":"9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4","Type":"ContainerDied","Data":"440a77b4316ea50ae88a491b55e481446243f1726ed41d68c884b2a22c9d9221"} Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.659966 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="440a77b4316ea50ae88a491b55e481446243f1726ed41d68c884b2a22c9d9221" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.660067 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hj9wd" Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.872154 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.872734 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerName="nova-api-log" containerID="cri-o://e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512" gracePeriod=30 Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.872796 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerName="nova-api-api" containerID="cri-o://db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28" gracePeriod=30 Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.896956 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.897616 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a7e45d19-2f50-4b5b-96b5-d5ac9601938d" containerName="nova-scheduler-scheduler" containerID="cri-o://3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4" gracePeriod=30 Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.929873 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.930279 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" 
containerName="nova-metadata-log" containerID="cri-o://6b5004b2b9c2f7b5da1a6880837e4a4096659d38403f2eff4dc6d476cd3d3a19" gracePeriod=30 Jan 22 10:59:28 crc kubenswrapper[4975]: I0122 10:59:28.930321 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-metadata" containerID="cri-o://6ee380a46ff83a4ec86931eeeeda60756dcedd49841e71f1f7e81c860253ebdb" gracePeriod=30 Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.575719 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.670666 4975 generic.go:334] "Generic (PLEG): container finished" podID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerID="db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28" exitCode=0 Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.670698 4975 generic.go:334] "Generic (PLEG): container finished" podID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerID="e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512" exitCode=143 Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.670721 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1","Type":"ContainerDied","Data":"db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28"} Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.670764 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.670789 4975 scope.go:117] "RemoveContainer" containerID="db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.670775 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1","Type":"ContainerDied","Data":"e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512"} Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.670922 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1","Type":"ContainerDied","Data":"e65c6aaba16acfd4948cebb681c2f585923b8afda6a53f02fe4370fcc29f0831"} Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.673244 4975 generic.go:334] "Generic (PLEG): container finished" podID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerID="6b5004b2b9c2f7b5da1a6880837e4a4096659d38403f2eff4dc6d476cd3d3a19" exitCode=143 Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.673271 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e481a916-5eac-4504-b63b-fbef7f1ee1fe","Type":"ContainerDied","Data":"6b5004b2b9c2f7b5da1a6880837e4a4096659d38403f2eff4dc6d476cd3d3a19"} Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.689487 4975 scope.go:117] "RemoveContainer" containerID="e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.706649 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-logs\") pod \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.706725 4975 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-config-data\") pod \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.706768 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-internal-tls-certs\") pod \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.706803 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-combined-ca-bundle\") pod \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.706963 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-public-tls-certs\") pod \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.707023 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6xp4\" (UniqueName: \"kubernetes.io/projected/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-kube-api-access-n6xp4\") pod \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\" (UID: \"0f5131e9-45c8-44d1-8c5f-cc0cef6468a1\") " Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.707471 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-logs" (OuterVolumeSpecName: "logs") pod "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" (UID: "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.707874 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.708619 4975 scope.go:117] "RemoveContainer" containerID="db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28" Jan 22 10:59:29 crc kubenswrapper[4975]: E0122 10:59:29.709317 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28\": container with ID starting with db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28 not found: ID does not exist" containerID="db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.709448 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28"} err="failed to get container status \"db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28\": rpc error: code = NotFound desc = could not find container \"db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28\": container with ID starting with db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28 not found: ID does not exist" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.709477 4975 scope.go:117] "RemoveContainer" containerID="e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512" Jan 22 10:59:29 crc kubenswrapper[4975]: E0122 10:59:29.709888 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512\": container with ID starting with e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512 not found: ID does not exist" containerID="e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.709922 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512"} err="failed to get container status \"e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512\": rpc error: code = NotFound desc = could not find container \"e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512\": container with ID starting with e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512 not found: ID does not exist" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.709942 4975 scope.go:117] "RemoveContainer" containerID="db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.710267 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28"} err="failed to get container status \"db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28\": rpc error: code = NotFound desc = could not find container \"db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28\": container with ID starting with db2f1fbdc117515166a7f18b6eb9302d1ec4f8cc8351862a46e4aca40a9a4c28 not found: ID does not exist" Jan 
22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.710298 4975 scope.go:117] "RemoveContainer" containerID="e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.710704 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512"} err="failed to get container status \"e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512\": rpc error: code = NotFound desc = could not find container \"e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512\": container with ID starting with e13aac70567f9a4ce1101c54e7149b17f7ccd08e3b0907fad1d77a4e67f1a512 not found: ID does not exist" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.715499 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-kube-api-access-n6xp4" (OuterVolumeSpecName: "kube-api-access-n6xp4") pod "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" (UID: "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1"). InnerVolumeSpecName "kube-api-access-n6xp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.740438 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-config-data" (OuterVolumeSpecName: "config-data") pod "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" (UID: "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.747461 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" (UID: "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.753896 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" (UID: "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.787952 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" (UID: "0f5131e9-45c8-44d1-8c5f-cc0cef6468a1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.814731 4975 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.814843 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.814856 4975 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.814864 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6xp4\" (UniqueName: \"kubernetes.io/projected/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-kube-api-access-n6xp4\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:29 crc kubenswrapper[4975]: I0122 10:59:29.814874 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.005383 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.016376 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.035643 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:30 crc kubenswrapper[4975]: E0122 10:59:30.036080 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerName="nova-api-log" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.036102 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerName="nova-api-log" Jan 22 10:59:30 crc kubenswrapper[4975]: E0122 10:59:30.036146 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerName="nova-api-api" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.036157 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerName="nova-api-api" Jan 22 10:59:30 crc kubenswrapper[4975]: E0122 10:59:30.036177 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" containerName="dnsmasq-dns" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.036184 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" containerName="dnsmasq-dns" Jan 22 10:59:30 crc kubenswrapper[4975]: E0122 10:59:30.036199 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" containerName="init" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.036207 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" containerName="init" Jan 22 10:59:30 crc kubenswrapper[4975]: E0122 10:59:30.036219 4975 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4" containerName="nova-manage" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.036226 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4" containerName="nova-manage" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.036450 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f4b9fe-d65b-4778-a38e-704314e4cf34" containerName="dnsmasq-dns" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.036473 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerName="nova-api-api" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.036486 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4" containerName="nova-manage" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.036507 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" containerName="nova-api-log" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.037683 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.040046 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.040270 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.041344 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.093470 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.120108 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.120160 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff6252b0-1c69-4710-9fd2-ab50960f3495-logs\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.120198 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cztv5\" (UniqueName: \"kubernetes.io/projected/ff6252b0-1c69-4710-9fd2-ab50960f3495-kube-api-access-cztv5\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.120600 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-config-data\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.120637 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.120737 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.222751 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cztv5\" (UniqueName: \"kubernetes.io/projected/ff6252b0-1c69-4710-9fd2-ab50960f3495-kube-api-access-cztv5\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.222799 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-config-data\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.222826 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.222888 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.222965 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.222991 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff6252b0-1c69-4710-9fd2-ab50960f3495-logs\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.223380 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff6252b0-1c69-4710-9fd2-ab50960f3495-logs\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.227815 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.228617 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.229805 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.229942 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6252b0-1c69-4710-9fd2-ab50960f3495-config-data\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.240087 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cztv5\" (UniqueName: \"kubernetes.io/projected/ff6252b0-1c69-4710-9fd2-ab50960f3495-kube-api-access-cztv5\") pod \"nova-api-0\" (UID: \"ff6252b0-1c69-4710-9fd2-ab50960f3495\") " pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.420920 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:59:30 crc kubenswrapper[4975]: I0122 10:59:30.968253 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:59:31 crc kubenswrapper[4975]: I0122 10:59:31.721473 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff6252b0-1c69-4710-9fd2-ab50960f3495","Type":"ContainerStarted","Data":"7343fa228110d3def72a0bb2fb70fd1ee3aeeb2b630413ac2960a9959c8009dd"} Jan 22 10:59:31 crc kubenswrapper[4975]: I0122 10:59:31.722105 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff6252b0-1c69-4710-9fd2-ab50960f3495","Type":"ContainerStarted","Data":"2a5395c3438e1d81b94a48bec7cfb63388a99d9f357fec52163009b2ed46a524"} Jan 22 10:59:31 crc kubenswrapper[4975]: I0122 10:59:31.770997 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5131e9-45c8-44d1-8c5f-cc0cef6468a1" path="/var/lib/kubelet/pods/0f5131e9-45c8-44d1-8c5f-cc0cef6468a1/volumes" Jan 22 10:59:32 crc kubenswrapper[4975]: I0122 10:59:32.029748 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:59800->10.217.0.198:8775: read: connection reset by peer" Jan 22 10:59:32 crc kubenswrapper[4975]: I0122 10:59:32.029834 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:59786->10.217.0.198:8775: read: connection reset by peer" Jan 22 10:59:32 crc kubenswrapper[4975]: I0122 10:59:32.734436 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff6252b0-1c69-4710-9fd2-ab50960f3495","Type":"ContainerStarted","Data":"ae9816084540e221ef28342113edc51b206cb10e5f8810ae319f7ef4369863b2"} Jan 22 10:59:32 crc kubenswrapper[4975]: I0122 10:59:32.736972 4975 generic.go:334] "Generic (PLEG): container finished" 
podID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerID="6ee380a46ff83a4ec86931eeeeda60756dcedd49841e71f1f7e81c860253ebdb" exitCode=0 Jan 22 10:59:32 crc kubenswrapper[4975]: I0122 10:59:32.737022 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e481a916-5eac-4504-b63b-fbef7f1ee1fe","Type":"ContainerDied","Data":"6ee380a46ff83a4ec86931eeeeda60756dcedd49841e71f1f7e81c860253ebdb"} Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.489452 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.585284 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-config-data\") pod \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.585707 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74twl\" (UniqueName: \"kubernetes.io/projected/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-kube-api-access-74twl\") pod \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.585744 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-combined-ca-bundle\") pod \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\" (UID: \"a7e45d19-2f50-4b5b-96b5-d5ac9601938d\") " Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.592403 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-kube-api-access-74twl" (OuterVolumeSpecName: "kube-api-access-74twl") pod "a7e45d19-2f50-4b5b-96b5-d5ac9601938d" (UID: "a7e45d19-2f50-4b5b-96b5-d5ac9601938d"). InnerVolumeSpecName "kube-api-access-74twl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.613358 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7e45d19-2f50-4b5b-96b5-d5ac9601938d" (UID: "a7e45d19-2f50-4b5b-96b5-d5ac9601938d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.617544 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-config-data" (OuterVolumeSpecName: "config-data") pod "a7e45d19-2f50-4b5b-96b5-d5ac9601938d" (UID: "a7e45d19-2f50-4b5b-96b5-d5ac9601938d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.689123 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.689193 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74twl\" (UniqueName: \"kubernetes.io/projected/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-kube-api-access-74twl\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.689204 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e45d19-2f50-4b5b-96b5-d5ac9601938d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.700760 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.751814 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e481a916-5eac-4504-b63b-fbef7f1ee1fe","Type":"ContainerDied","Data":"085d8e8dc803f4b5b793d3e32186b6412c07225f827a2fb093f4c22fd4a7046a"} Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.751866 4975 scope.go:117] "RemoveContainer" containerID="6ee380a46ff83a4ec86931eeeeda60756dcedd49841e71f1f7e81c860253ebdb" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.751990 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.755263 4975 generic.go:334] "Generic (PLEG): container finished" podID="a7e45d19-2f50-4b5b-96b5-d5ac9601938d" containerID="3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4" exitCode=0 Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.756148 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.763469 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7e45d19-2f50-4b5b-96b5-d5ac9601938d","Type":"ContainerDied","Data":"3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4"} Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.779457 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7e45d19-2f50-4b5b-96b5-d5ac9601938d","Type":"ContainerDied","Data":"9fb8e2f69b7f99d905613e1363b04821ef0c65b790ffc75fa72853646402f63e"} Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.790315 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c6qh\" (UniqueName: \"kubernetes.io/projected/e481a916-5eac-4504-b63b-fbef7f1ee1fe-kube-api-access-6c6qh\") pod \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.790430 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-nova-metadata-tls-certs\") pod \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.790554 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e481a916-5eac-4504-b63b-fbef7f1ee1fe-logs\") pod \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.791565 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-config-data\") pod \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.791747 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-combined-ca-bundle\") pod \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\" (UID: \"e481a916-5eac-4504-b63b-fbef7f1ee1fe\") " Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.792715 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e481a916-5eac-4504-b63b-fbef7f1ee1fe-logs" (OuterVolumeSpecName: "logs") pod "e481a916-5eac-4504-b63b-fbef7f1ee1fe" (UID: "e481a916-5eac-4504-b63b-fbef7f1ee1fe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.804311 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.8042887309999998 podStartE2EDuration="3.804288731s" podCreationTimestamp="2026-01-22 10:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:59:33.781723912 +0000 UTC m=+1366.439760032" watchObservedRunningTime="2026-01-22 10:59:33.804288731 +0000 UTC m=+1366.462324851" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.809628 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e481a916-5eac-4504-b63b-fbef7f1ee1fe-kube-api-access-6c6qh" (OuterVolumeSpecName: "kube-api-access-6c6qh") pod "e481a916-5eac-4504-b63b-fbef7f1ee1fe" (UID: "e481a916-5eac-4504-b63b-fbef7f1ee1fe"). InnerVolumeSpecName "kube-api-access-6c6qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.817408 4975 scope.go:117] "RemoveContainer" containerID="6b5004b2b9c2f7b5da1a6880837e4a4096659d38403f2eff4dc6d476cd3d3a19" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.824472 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.828298 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-config-data" (OuterVolumeSpecName: "config-data") pod "e481a916-5eac-4504-b63b-fbef7f1ee1fe" (UID: "e481a916-5eac-4504-b63b-fbef7f1ee1fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.833005 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.851675 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:59:33 crc kubenswrapper[4975]: E0122 10:59:33.852040 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-log" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.852056 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-log" Jan 22 10:59:33 crc kubenswrapper[4975]: E0122 10:59:33.852070 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-metadata" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.852078 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-metadata" Jan 22 10:59:33 crc kubenswrapper[4975]: E0122 10:59:33.852099 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e45d19-2f50-4b5b-96b5-d5ac9601938d" containerName="nova-scheduler-scheduler" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.852105 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e45d19-2f50-4b5b-96b5-d5ac9601938d" containerName="nova-scheduler-scheduler" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.852264 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e45d19-2f50-4b5b-96b5-d5ac9601938d" containerName="nova-scheduler-scheduler" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.852285 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-log" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.852306 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" containerName="nova-metadata-metadata" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.852835 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.853683 4975 scope.go:117] "RemoveContainer" containerID="3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.857399 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.861576 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e481a916-5eac-4504-b63b-fbef7f1ee1fe" (UID: "e481a916-5eac-4504-b63b-fbef7f1ee1fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.885617 4975 scope.go:117] "RemoveContainer" containerID="3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4" Jan 22 10:59:33 crc kubenswrapper[4975]: E0122 10:59:33.886490 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4\": container with ID starting with 3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4 not found: ID does not exist" containerID="3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.886537 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4"} err="failed to get container status \"3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4\": rpc error: code = NotFound desc = could not find container \"3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4\": container with ID starting with 3905c92ab5f630173534b4f8181cc8e4be30b1e10bfad54860362bab055a3ed4 not found: ID does not exist" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.897056 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.897094 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.897106 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c6qh\" (UniqueName: \"kubernetes.io/projected/e481a916-5eac-4504-b63b-fbef7f1ee1fe-kube-api-access-6c6qh\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.897115 4975 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e481a916-5eac-4504-b63b-fbef7f1ee1fe-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.898724 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.998150 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86380f07-beeb-467d-aabd-0927af769a35-config-data\") pod \"nova-scheduler-0\" (UID: \"86380f07-beeb-467d-aabd-0927af769a35\") " pod="openstack/nova-scheduler-0" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.998190 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86380f07-beeb-467d-aabd-0927af769a35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86380f07-beeb-467d-aabd-0927af769a35\") " pod="openstack/nova-scheduler-0" Jan 22 10:59:33 crc kubenswrapper[4975]: I0122 10:59:33.998237 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54p9z\" (UniqueName: 
\"kubernetes.io/projected/86380f07-beeb-467d-aabd-0927af769a35-kube-api-access-54p9z\") pod \"nova-scheduler-0\" (UID: \"86380f07-beeb-467d-aabd-0927af769a35\") " pod="openstack/nova-scheduler-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.008706 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e481a916-5eac-4504-b63b-fbef7f1ee1fe" (UID: "e481a916-5eac-4504-b63b-fbef7f1ee1fe"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.090898 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.101207 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86380f07-beeb-467d-aabd-0927af769a35-config-data\") pod \"nova-scheduler-0\" (UID: \"86380f07-beeb-467d-aabd-0927af769a35\") " pod="openstack/nova-scheduler-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.101254 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86380f07-beeb-467d-aabd-0927af769a35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86380f07-beeb-467d-aabd-0927af769a35\") " pod="openstack/nova-scheduler-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.101301 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54p9z\" (UniqueName: \"kubernetes.io/projected/86380f07-beeb-467d-aabd-0927af769a35-kube-api-access-54p9z\") pod \"nova-scheduler-0\" (UID: \"86380f07-beeb-467d-aabd-0927af769a35\") " pod="openstack/nova-scheduler-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.101451 4975 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e481a916-5eac-4504-b63b-fbef7f1ee1fe-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.102774 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.106538 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86380f07-beeb-467d-aabd-0927af769a35-config-data\") pod \"nova-scheduler-0\" (UID: \"86380f07-beeb-467d-aabd-0927af769a35\") " pod="openstack/nova-scheduler-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.110952 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.112434 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.119878 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.121717 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86380f07-beeb-467d-aabd-0927af769a35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86380f07-beeb-467d-aabd-0927af769a35\") " pod="openstack/nova-scheduler-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.121871 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.125470 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54p9z\" (UniqueName: \"kubernetes.io/projected/86380f07-beeb-467d-aabd-0927af769a35-kube-api-access-54p9z\") pod \"nova-scheduler-0\" (UID: \"86380f07-beeb-467d-aabd-0927af769a35\") " pod="openstack/nova-scheduler-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.169111 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.186987 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.202555 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjksb\" (UniqueName: \"kubernetes.io/projected/983309a9-5940-4b5f-b84d-a219bbe96d93-kube-api-access-tjksb\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.202607 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983309a9-5940-4b5f-b84d-a219bbe96d93-config-data\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.202700 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983309a9-5940-4b5f-b84d-a219bbe96d93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.202730 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/983309a9-5940-4b5f-b84d-a219bbe96d93-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.202756 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/983309a9-5940-4b5f-b84d-a219bbe96d93-logs\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.304633 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/983309a9-5940-4b5f-b84d-a219bbe96d93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.304697 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/983309a9-5940-4b5f-b84d-a219bbe96d93-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.304728 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/983309a9-5940-4b5f-b84d-a219bbe96d93-logs\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.304781 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjksb\" (UniqueName: \"kubernetes.io/projected/983309a9-5940-4b5f-b84d-a219bbe96d93-kube-api-access-tjksb\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.304804 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983309a9-5940-4b5f-b84d-a219bbe96d93-config-data\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.305739 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/983309a9-5940-4b5f-b84d-a219bbe96d93-logs\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.310024 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983309a9-5940-4b5f-b84d-a219bbe96d93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.312121 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983309a9-5940-4b5f-b84d-a219bbe96d93-config-data\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.321654 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/983309a9-5940-4b5f-b84d-a219bbe96d93-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.322607 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjksb\" (UniqueName: \"kubernetes.io/projected/983309a9-5940-4b5f-b84d-a219bbe96d93-kube-api-access-tjksb\") pod \"nova-metadata-0\" (UID: \"983309a9-5940-4b5f-b84d-a219bbe96d93\") " pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.450246 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:59:34 crc kubenswrapper[4975]: W0122 10:59:34.652455 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86380f07_beeb_467d_aabd_0927af769a35.slice/crio-4c06c9a473cab50e04f5beb902257a277fd8a20b294634692f872c472ffe9bdc WatchSource:0}: Error finding container 4c06c9a473cab50e04f5beb902257a277fd8a20b294634692f872c472ffe9bdc: Status 404 returned error can't find the container with id 4c06c9a473cab50e04f5beb902257a277fd8a20b294634692f872c472ffe9bdc Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.658140 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.765472 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86380f07-beeb-467d-aabd-0927af769a35","Type":"ContainerStarted","Data":"4c06c9a473cab50e04f5beb902257a277fd8a20b294634692f872c472ffe9bdc"} Jan 22 10:59:34 crc kubenswrapper[4975]: I0122 10:59:34.877462 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:59:35 crc kubenswrapper[4975]: I0122 10:59:35.767388 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e45d19-2f50-4b5b-96b5-d5ac9601938d" path="/var/lib/kubelet/pods/a7e45d19-2f50-4b5b-96b5-d5ac9601938d/volumes" Jan 22 10:59:35 crc kubenswrapper[4975]: I0122 10:59:35.768223 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e481a916-5eac-4504-b63b-fbef7f1ee1fe" path="/var/lib/kubelet/pods/e481a916-5eac-4504-b63b-fbef7f1ee1fe/volumes" Jan 22 10:59:35 crc kubenswrapper[4975]: I0122 10:59:35.776932 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86380f07-beeb-467d-aabd-0927af769a35","Type":"ContainerStarted","Data":"8e36c2f960b0c3837a248faa65163b7e3cf146d542fb8fe3ceec455f13fc24a3"} Jan 22 10:59:35 crc kubenswrapper[4975]: I0122 10:59:35.778693 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"983309a9-5940-4b5f-b84d-a219bbe96d93","Type":"ContainerStarted","Data":"b57296f75c82410d09820ccdac29a9c738ddae9a1bd16b6b48c83c4fecf3ffef"} Jan 22 10:59:35 crc kubenswrapper[4975]: I0122 10:59:35.778741 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"983309a9-5940-4b5f-b84d-a219bbe96d93","Type":"ContainerStarted","Data":"8644a37a9f44d2884243158acd7a18185104ddacf86fa375f87237fe469a017e"} Jan 22 10:59:35 crc kubenswrapper[4975]: I0122 10:59:35.778757 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"983309a9-5940-4b5f-b84d-a219bbe96d93","Type":"ContainerStarted","Data":"dc89bd4dfab94b71f81395073b46f8d33a095ed20b55d26997f2635989c75701"} Jan 22 10:59:35 crc kubenswrapper[4975]: I0122 10:59:35.806847 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.806808173 podStartE2EDuration="2.806808173s" podCreationTimestamp="2026-01-22 10:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:59:35.795253651 +0000 UTC m=+1368.453289761" watchObservedRunningTime="2026-01-22 10:59:35.806808173 +0000 UTC m=+1368.464844293" Jan 22 10:59:35 crc kubenswrapper[4975]: I0122 10:59:35.829557 4975 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.8295357060000002 podStartE2EDuration="1.829535706s" podCreationTimestamp="2026-01-22 10:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:59:35.822648291 +0000 UTC m=+1368.480684391" watchObservedRunningTime="2026-01-22 10:59:35.829535706 +0000 UTC m=+1368.487571816" Jan 22 10:59:39 crc kubenswrapper[4975]: I0122 10:59:39.188509 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 10:59:39 crc kubenswrapper[4975]: I0122 10:59:39.453165 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:59:39 crc kubenswrapper[4975]: I0122 10:59:39.453568 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:59:40 crc kubenswrapper[4975]: I0122 10:59:40.421506 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:59:40 crc kubenswrapper[4975]: I0122 10:59:40.421600 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:59:41 crc kubenswrapper[4975]: I0122 10:59:41.440635 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff6252b0-1c69-4710-9fd2-ab50960f3495" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:59:41 crc kubenswrapper[4975]: I0122 10:59:41.440762 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff6252b0-1c69-4710-9fd2-ab50960f3495" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:59:44 crc kubenswrapper[4975]: I0122 10:59:44.187689 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 10:59:44 crc kubenswrapper[4975]: I0122 10:59:44.225114 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 10:59:44 crc kubenswrapper[4975]: I0122 10:59:44.453254 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 10:59:44 crc kubenswrapper[4975]: I0122 10:59:44.453315 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 10:59:44 crc kubenswrapper[4975]: I0122 10:59:44.894310 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 10:59:45 crc kubenswrapper[4975]: I0122 10:59:45.465501 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="983309a9-5940-4b5f-b84d-a219bbe96d93" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:59:45 crc kubenswrapper[4975]: I0122 10:59:45.465521 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="983309a9-5940-4b5f-b84d-a219bbe96d93" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:59:48 crc kubenswrapper[4975]: I0122 10:59:48.922776 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 10:59:50 crc kubenswrapper[4975]: I0122 10:59:50.427664 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 10:59:50 crc kubenswrapper[4975]: I0122 10:59:50.428359 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 10:59:50 crc kubenswrapper[4975]: I0122 10:59:50.430898 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 10:59:50 crc kubenswrapper[4975]: I0122 10:59:50.441765 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 10:59:50 crc kubenswrapper[4975]: I0122 10:59:50.938003 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 10:59:50 crc kubenswrapper[4975]: I0122 10:59:50.943912 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 10:59:54 crc kubenswrapper[4975]: I0122 10:59:54.460322 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 10:59:54 crc kubenswrapper[4975]: I0122 10:59:54.462360 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 10:59:54 crc kubenswrapper[4975]: I0122 10:59:54.466620 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 10:59:54 crc kubenswrapper[4975]: I0122 10:59:54.980790 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 10:59:57 crc kubenswrapper[4975]: I0122 10:59:57.436226 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:59:57 crc kubenswrapper[4975]: I0122 10:59:57.436548 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:59:57 crc kubenswrapper[4975]: I0122 10:59:57.436620 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 10:59:57 crc kubenswrapper[4975]: I0122 10:59:57.437167 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16cfad28f24f35696e7c92e7ab98650eab8e998609b5fcdbdad61136fb9627fd"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:59:57 crc kubenswrapper[4975]: I0122 10:59:57.437223 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" 
podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://16cfad28f24f35696e7c92e7ab98650eab8e998609b5fcdbdad61136fb9627fd" gracePeriod=600 Jan 22 10:59:58 crc kubenswrapper[4975]: I0122 10:59:58.013312 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="16cfad28f24f35696e7c92e7ab98650eab8e998609b5fcdbdad61136fb9627fd" exitCode=0 Jan 22 10:59:58 crc kubenswrapper[4975]: I0122 10:59:58.013695 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"16cfad28f24f35696e7c92e7ab98650eab8e998609b5fcdbdad61136fb9627fd"} Jan 22 10:59:58 crc kubenswrapper[4975]: I0122 10:59:58.013865 4975 scope.go:117] "RemoveContainer" containerID="96a4c784d660fbbfb3a8a9514335df4fa2fd66b35e30bd92a673d8e26178170f" Jan 22 10:59:59 crc kubenswrapper[4975]: I0122 10:59:59.027707 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"} Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.170223 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr"] Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.174630 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.183418 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr"] Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.192326 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.192782 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.295261 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-config-volume\") pod \"collect-profiles-29484660-vrhzr\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.295763 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-secret-volume\") pod \"collect-profiles-29484660-vrhzr\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.295807 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcg9p\" (UniqueName: \"kubernetes.io/projected/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-kube-api-access-fcg9p\") pod \"collect-profiles-29484660-vrhzr\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.397153 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-secret-volume\") pod \"collect-profiles-29484660-vrhzr\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.397200 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcg9p\" (UniqueName: \"kubernetes.io/projected/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-kube-api-access-fcg9p\") pod \"collect-profiles-29484660-vrhzr\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.397262 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-config-volume\") pod \"collect-profiles-29484660-vrhzr\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.399043 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-config-volume\") pod \"collect-profiles-29484660-vrhzr\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.403475 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-secret-volume\") pod \"collect-profiles-29484660-vrhzr\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.414215 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcg9p\" (UniqueName: \"kubernetes.io/projected/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-kube-api-access-fcg9p\") pod \"collect-profiles-29484660-vrhzr\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.518277 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:00 crc kubenswrapper[4975]: I0122 11:00:00.980982 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr"] Jan 22 11:00:00 crc kubenswrapper[4975]: W0122 11:00:00.989976 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d11385c_ca1a_4cf2_b9e3_e7c9000de99c.slice/crio-b5bbc542315c0f78044409fa34d9dee1fa13e7d05f6b986720176d01e34c4ac8 WatchSource:0}: Error finding container b5bbc542315c0f78044409fa34d9dee1fa13e7d05f6b986720176d01e34c4ac8: Status 404 returned error can't find the container with id b5bbc542315c0f78044409fa34d9dee1fa13e7d05f6b986720176d01e34c4ac8 Jan 22 11:00:01 crc kubenswrapper[4975]: I0122 11:00:01.048294 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" event={"ID":"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c","Type":"ContainerStarted","Data":"b5bbc542315c0f78044409fa34d9dee1fa13e7d05f6b986720176d01e34c4ac8"} Jan 22 11:00:02 crc kubenswrapper[4975]: I0122 11:00:02.063193 4975 generic.go:334] "Generic (PLEG): container finished" podID="1d11385c-ca1a-4cf2-b9e3-e7c9000de99c" containerID="c79523c64b9dd36984010a8b02ab3aed6285c748d6264bc929aa022d06dce4f6" exitCode=0 Jan 22 11:00:02 crc kubenswrapper[4975]: I0122 11:00:02.063313 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" event={"ID":"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c","Type":"ContainerDied","Data":"c79523c64b9dd36984010a8b02ab3aed6285c748d6264bc929aa022d06dce4f6"} Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.297360 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.442304 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.466180 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-secret-volume\") pod \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.466249 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcg9p\" (UniqueName: \"kubernetes.io/projected/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-kube-api-access-fcg9p\") pod \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.466402 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-config-volume\") pod \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\" (UID: \"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c\") " Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.467080 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-config-volume" (OuterVolumeSpecName: "config-volume") pod "1d11385c-ca1a-4cf2-b9e3-e7c9000de99c" (UID: "1d11385c-ca1a-4cf2-b9e3-e7c9000de99c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.472016 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-kube-api-access-fcg9p" (OuterVolumeSpecName: "kube-api-access-fcg9p") pod "1d11385c-ca1a-4cf2-b9e3-e7c9000de99c" (UID: "1d11385c-ca1a-4cf2-b9e3-e7c9000de99c"). InnerVolumeSpecName "kube-api-access-fcg9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.472617 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1d11385c-ca1a-4cf2-b9e3-e7c9000de99c" (UID: "1d11385c-ca1a-4cf2-b9e3-e7c9000de99c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.568124 4975 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.568152 4975 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:03 crc kubenswrapper[4975]: I0122 11:00:03.568163 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcg9p\" (UniqueName: \"kubernetes.io/projected/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c-kube-api-access-fcg9p\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:04 crc kubenswrapper[4975]: I0122 11:00:04.088202 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" event={"ID":"1d11385c-ca1a-4cf2-b9e3-e7c9000de99c","Type":"ContainerDied","Data":"b5bbc542315c0f78044409fa34d9dee1fa13e7d05f6b986720176d01e34c4ac8"} Jan 22 11:00:04 crc kubenswrapper[4975]: I0122 11:00:04.088261 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5bbc542315c0f78044409fa34d9dee1fa13e7d05f6b986720176d01e34c4ac8" Jan 22 11:00:04 crc kubenswrapper[4975]: I0122 11:00:04.088284 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr" Jan 22 11:00:04 crc kubenswrapper[4975]: I0122 11:00:04.292575 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.302251 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-66vxg"] Jan 22 11:00:05 crc kubenswrapper[4975]: E0122 11:00:05.303079 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d11385c-ca1a-4cf2-b9e3-e7c9000de99c" containerName="collect-profiles" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.303097 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d11385c-ca1a-4cf2-b9e3-e7c9000de99c" containerName="collect-profiles" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.303354 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d11385c-ca1a-4cf2-b9e3-e7c9000de99c" containerName="collect-profiles" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.335639 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.358648 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66vxg"] Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.401516 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-catalog-content\") pod \"redhat-operators-66vxg\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.401661 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvrn\" (UniqueName: \"kubernetes.io/projected/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-kube-api-access-5fvrn\") pod \"redhat-operators-66vxg\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.401708 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-utilities\") pod \"redhat-operators-66vxg\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.503595 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-catalog-content\") pod \"redhat-operators-66vxg\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.504210 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-catalog-content\") pod \"redhat-operators-66vxg\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.504290 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvrn\" (UniqueName: \"kubernetes.io/projected/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-kube-api-access-5fvrn\") pod \"redhat-operators-66vxg\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.504325 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-utilities\") pod \"redhat-operators-66vxg\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.504900 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-utilities\") pod \"redhat-operators-66vxg\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.524558 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5fvrn\" (UniqueName: \"kubernetes.io/projected/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-kube-api-access-5fvrn\") pod \"redhat-operators-66vxg\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:05 crc kubenswrapper[4975]: I0122 11:00:05.676076 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:07 crc kubenswrapper[4975]: I0122 11:00:06.430689 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66vxg"] Jan 22 11:00:07 crc kubenswrapper[4975]: I0122 11:00:07.113941 4975 generic.go:334] "Generic (PLEG): container finished" podID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerID="cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418" exitCode=0 Jan 22 11:00:07 crc kubenswrapper[4975]: I0122 11:00:07.114024 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66vxg" event={"ID":"1788d398-e8d9-4e64-875a-a3a2d5c70f2e","Type":"ContainerDied","Data":"cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418"} Jan 22 11:00:07 crc kubenswrapper[4975]: I0122 11:00:07.114155 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66vxg" event={"ID":"1788d398-e8d9-4e64-875a-a3a2d5c70f2e","Type":"ContainerStarted","Data":"b003b2b4cba1a24d0cb87080f26a3b97cc27e63487e8f48101eb9ad52bbfc157"} Jan 22 11:00:07 crc kubenswrapper[4975]: I0122 11:00:07.710620 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="be23c045-c637-4832-8944-ed6250b0d8b3" containerName="rabbitmq" containerID="cri-o://744307bfc9991ae8786e9736a8a895cc0d7376574525d240d42e905082fe5a8c" gracePeriod=604796 Jan 22 11:00:08 crc kubenswrapper[4975]: I0122 11:00:08.715401 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerName="rabbitmq" containerID="cri-o://ecd6933674a3deda52cfee43ffd08f5bd0aabd7a94ccdfb387eb104a02fc2dba" gracePeriod=604796 Jan 22 11:00:12 crc kubenswrapper[4975]: I0122 11:00:12.181993 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66vxg" event={"ID":"1788d398-e8d9-4e64-875a-a3a2d5c70f2e","Type":"ContainerStarted","Data":"9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec"} Jan 22 11:00:13 crc kubenswrapper[4975]: I0122 11:00:13.029089 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="be23c045-c637-4832-8944-ed6250b0d8b3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 22 11:00:13 crc kubenswrapper[4975]: I0122 11:00:13.127931 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 22 11:00:16 crc kubenswrapper[4975]: I0122 11:00:16.226318 4975 generic.go:334] "Generic (PLEG): container finished" podID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerID="9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec" exitCode=0 Jan 22 11:00:16 crc kubenswrapper[4975]: I0122 11:00:16.226437 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-66vxg" event={"ID":"1788d398-e8d9-4e64-875a-a3a2d5c70f2e","Type":"ContainerDied","Data":"9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec"} Jan 22 11:00:18 crc kubenswrapper[4975]: I0122 11:00:18.253386 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66vxg" event={"ID":"1788d398-e8d9-4e64-875a-a3a2d5c70f2e","Type":"ContainerStarted","Data":"4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780"} Jan 22 11:00:19 crc kubenswrapper[4975]: I0122 11:00:19.295296 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-66vxg" podStartSLOduration=4.467804496 podStartE2EDuration="14.29527354s" podCreationTimestamp="2026-01-22 11:00:05 +0000 UTC" firstStartedPulling="2026-01-22 11:00:08.12695055 +0000 UTC m=+1400.784986660" lastFinishedPulling="2026-01-22 11:00:17.954419594 +0000 UTC m=+1410.612455704" observedRunningTime="2026-01-22 11:00:19.285614269 +0000 UTC m=+1411.943650419" watchObservedRunningTime="2026-01-22 11:00:19.29527354 +0000 UTC m=+1411.953309650" Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.028050 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="be23c045-c637-4832-8944-ed6250b0d8b3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.127851 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.827469 4975 generic.go:334] "Generic (PLEG): container finished" podID="be23c045-c637-4832-8944-ed6250b0d8b3" containerID="744307bfc9991ae8786e9736a8a895cc0d7376574525d240d42e905082fe5a8c" exitCode=0 Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.827559 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"be23c045-c637-4832-8944-ed6250b0d8b3","Type":"ContainerDied","Data":"744307bfc9991ae8786e9736a8a895cc0d7376574525d240d42e905082fe5a8c"} Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.833175 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-q5f65"] Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.834964 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.838674 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.839155 4975 generic.go:334] "Generic (PLEG): container finished" podID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerID="ecd6933674a3deda52cfee43ffd08f5bd0aabd7a94ccdfb387eb104a02fc2dba" exitCode=0 Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.839200 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9","Type":"ContainerDied","Data":"ecd6933674a3deda52cfee43ffd08f5bd0aabd7a94ccdfb387eb104a02fc2dba"} Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.864200 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-q5f65"] Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.919504 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:23 crc kubenswrapper[4975]: I0122 11:00:23.926116 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016158 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-plugins-conf\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016227 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-config-data\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016252 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-confd\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016271 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-pod-info\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016291 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016309 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016390 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-erlang-cookie\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016427 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-plugins-conf\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016497 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-erlang-cookie\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016517 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-tls\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016548 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-tls\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016567 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be23c045-c637-4832-8944-ed6250b0d8b3-erlang-cookie-secret\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016590 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-erlang-cookie-secret\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016623 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-plugins\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016668 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5svp\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-kube-api-access-s5svp\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016697 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tdvs\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-kube-api-access-4tdvs\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016724 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-config-data\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016742 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-server-conf\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016758 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-server-conf\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016809 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-plugins\") pod \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\" (UID: \"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016836 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-confd\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.016857 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be23c045-c637-4832-8944-ed6250b0d8b3-pod-info\") pod \"be23c045-c637-4832-8944-ed6250b0d8b3\" (UID: \"be23c045-c637-4832-8944-ed6250b0d8b3\") " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.017054 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.017092 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.017132 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-config\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.017233 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knfqk\" (UniqueName: \"kubernetes.io/projected/0062ae04-c67b-4658-8170-40e10b2f579c-kube-api-access-knfqk\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.017312 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.017350 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.017375 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.017920 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.019769 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.020722 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.027534 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-kube-api-access-s5svp" (OuterVolumeSpecName: "kube-api-access-s5svp") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "kube-api-access-s5svp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.028268 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.029304 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.031525 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/be23c045-c637-4832-8944-ed6250b0d8b3-pod-info" (OuterVolumeSpecName: "pod-info") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.031923 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.032236 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.036510 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-pod-info" (OuterVolumeSpecName: "pod-info") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.036925 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.037585 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be23c045-c637-4832-8944-ed6250b0d8b3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.038032 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.043438 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-kube-api-access-4tdvs" (OuterVolumeSpecName: "kube-api-access-4tdvs") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "kube-api-access-4tdvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.047416 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.051449 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.097534 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-config-data" (OuterVolumeSpecName: "config-data") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.118625 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knfqk\" (UniqueName: \"kubernetes.io/projected/0062ae04-c67b-4658-8170-40e10b2f579c-kube-api-access-knfqk\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.118741 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.118761 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.118784 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.118811 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.118838 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.118880 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-config\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119031 4975 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119061 4975 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119080 4975 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119094 4975 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119119 4975 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119130 4975 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119140 4975 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119151 4975 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be23c045-c637-4832-8944-ed6250b0d8b3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119161 4975 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119172 4975 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119184 4975 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119196 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5svp\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-kube-api-access-s5svp\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119207 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tdvs\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-kube-api-access-4tdvs\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119217 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119228 4975 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119239 4975 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be23c045-c637-4832-8944-ed6250b0d8b3-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119249 4975 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119539 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119765 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.119782 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.120108 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-config\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: 
I0122 11:00:24.120371 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.121736 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-config-data" (OuterVolumeSpecName: "config-data") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.122784 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.138318 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-server-conf" (OuterVolumeSpecName: "server-conf") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.143025 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knfqk\" (UniqueName: \"kubernetes.io/projected/0062ae04-c67b-4658-8170-40e10b2f579c-kube-api-access-knfqk\") pod \"dnsmasq-dns-79bd4cc8c9-q5f65\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.161360 4975 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.166467 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-server-conf" (OuterVolumeSpecName: "server-conf") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.174963 4975 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.218705 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" (UID: "ac1b051e-7db1-4dc3-b807-b1a7cb058aa9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.221475 4975 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.221501 4975 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.221510 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be23c045-c637-4832-8944-ed6250b0d8b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.221519 4975 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.221530 4975 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.221539 4975 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.226212 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "be23c045-c637-4832-8944-ed6250b0d8b3" (UID: "be23c045-c637-4832-8944-ed6250b0d8b3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.239089 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.324196 4975 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be23c045-c637-4832-8944-ed6250b0d8b3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.748631 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-q5f65"] Jan 22 11:00:24 crc kubenswrapper[4975]: W0122 11:00:24.752824 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0062ae04_c67b_4658_8170_40e10b2f579c.slice/crio-fcdd79914a55b2e2f55a9e7a4c3568ff87dc4f5b952c9016fbe270b82e4810bc WatchSource:0}: Error finding container fcdd79914a55b2e2f55a9e7a4c3568ff87dc4f5b952c9016fbe270b82e4810bc: Status 404 returned error can't find the container with id fcdd79914a55b2e2f55a9e7a4c3568ff87dc4f5b952c9016fbe270b82e4810bc Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.851778 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.852408 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"be23c045-c637-4832-8944-ed6250b0d8b3","Type":"ContainerDied","Data":"e605e585eae21adbd376d5b1b58533c62b7416ab61ada02bbff1f6197654bd3c"} Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.852456 4975 scope.go:117] "RemoveContainer" containerID="744307bfc9991ae8786e9736a8a895cc0d7376574525d240d42e905082fe5a8c" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.856440 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" event={"ID":"0062ae04-c67b-4658-8170-40e10b2f579c","Type":"ContainerStarted","Data":"fcdd79914a55b2e2f55a9e7a4c3568ff87dc4f5b952c9016fbe270b82e4810bc"} Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.860677 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ac1b051e-7db1-4dc3-b807-b1a7cb058aa9","Type":"ContainerDied","Data":"9a52afeea93102041ca27f13b0cac63859d3d6844540c93ef34ac5212bf4d73d"} Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.860792 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.944057 4975 scope.go:117] "RemoveContainer" containerID="b572079d5df8d0dd6fc8e50919ce231fd9afd70997e377e23ee67f64c9525350" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.975079 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.977258 4975 scope.go:117] "RemoveContainer" containerID="ecd6933674a3deda52cfee43ffd08f5bd0aabd7a94ccdfb387eb104a02fc2dba" Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.987996 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 11:00:24 crc kubenswrapper[4975]: I0122 11:00:24.999676 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.012020 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.027887 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.046044 4975 scope.go:117] "RemoveContainer" containerID="5602f3dd0b6da7296455d135a004b394e4a988770a31398c9a509358b5a9efde" Jan 22 11:00:25 crc kubenswrapper[4975]: E0122 11:00:25.051668 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerName="setup-container" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.051707 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerName="setup-container" Jan 22 11:00:25 crc kubenswrapper[4975]: E0122 11:00:25.051728 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be23c045-c637-4832-8944-ed6250b0d8b3" containerName="setup-container" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.051739 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="be23c045-c637-4832-8944-ed6250b0d8b3" containerName="setup-container" Jan 22 11:00:25 crc kubenswrapper[4975]: E0122 11:00:25.051753 4975 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerName="rabbitmq" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.051762 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerName="rabbitmq" Jan 22 11:00:25 crc kubenswrapper[4975]: E0122 11:00:25.051785 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be23c045-c637-4832-8944-ed6250b0d8b3" containerName="rabbitmq" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.051793 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="be23c045-c637-4832-8944-ed6250b0d8b3" containerName="rabbitmq" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.052013 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" containerName="rabbitmq" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.052035 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="be23c045-c637-4832-8944-ed6250b0d8b3" containerName="rabbitmq" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.053230 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.054349 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.056931 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.056964 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5nhpj" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.057153 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.057356 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.057376 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.057466 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.057539 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.058666 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.060207 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.061156 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.061438 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.061560 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vjxnt" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.061674 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.062393 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.065073 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.074554 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.092837 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.148694 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f2d061f-103c-4b8a-aa45-3da02ffebff5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.148748 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6110323c-c3aa-4d7b-a201-20d85afa29d3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.148778 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6110323c-c3aa-4d7b-a201-20d85afa29d3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.148799 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.148828 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " 
pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.148917 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.148967 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6110323c-c3aa-4d7b-a201-20d85afa29d3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149025 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f2d061f-103c-4b8a-aa45-3da02ffebff5-config-data\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149051 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f2d061f-103c-4b8a-aa45-3da02ffebff5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149146 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149195 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mnf\" (UniqueName: \"kubernetes.io/projected/6110323c-c3aa-4d7b-a201-20d85afa29d3-kube-api-access-w8mnf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149212 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149247 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149283 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 
11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149303 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6110323c-c3aa-4d7b-a201-20d85afa29d3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149363 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149381 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfgf\" (UniqueName: \"kubernetes.io/projected/9f2d061f-103c-4b8a-aa45-3da02ffebff5-kube-api-access-5qfgf\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149400 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f2d061f-103c-4b8a-aa45-3da02ffebff5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149417 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f2d061f-103c-4b8a-aa45-3da02ffebff5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149434 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149478 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.149923 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6110323c-c3aa-4d7b-a201-20d85afa29d3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.253573 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6110323c-c3aa-4d7b-a201-20d85afa29d3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: 
I0122 11:00:25.253662 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f2d061f-103c-4b8a-aa45-3da02ffebff5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.253711 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6110323c-c3aa-4d7b-a201-20d85afa29d3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.253738 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6110323c-c3aa-4d7b-a201-20d85afa29d3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.253760 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.253801 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.253829 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.253856 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6110323c-c3aa-4d7b-a201-20d85afa29d3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.253912 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f2d061f-103c-4b8a-aa45-3da02ffebff5-config-data\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.253945 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f2d061f-103c-4b8a-aa45-3da02ffebff5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254019 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254061 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mnf\" (UniqueName: \"kubernetes.io/projected/6110323c-c3aa-4d7b-a201-20d85afa29d3-kube-api-access-w8mnf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254085 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254120 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254155 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254180 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6110323c-c3aa-4d7b-a201-20d85afa29d3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254237 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254266 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfgf\" (UniqueName: \"kubernetes.io/projected/9f2d061f-103c-4b8a-aa45-3da02ffebff5-kube-api-access-5qfgf\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254284 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f2d061f-103c-4b8a-aa45-3da02ffebff5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254299 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f2d061f-103c-4b8a-aa45-3da02ffebff5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " 
pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254319 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.254377 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.255216 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6110323c-c3aa-4d7b-a201-20d85afa29d3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.255383 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6110323c-c3aa-4d7b-a201-20d85afa29d3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.255801 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.256064 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.256634 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.257134 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6110323c-c3aa-4d7b-a201-20d85afa29d3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.257319 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f2d061f-103c-4b8a-aa45-3da02ffebff5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.257559 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.257869 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.258131 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f2d061f-103c-4b8a-aa45-3da02ffebff5-config-data\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.258235 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.259045 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f2d061f-103c-4b8a-aa45-3da02ffebff5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.259701 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.259728 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.259927 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6110323c-c3aa-4d7b-a201-20d85afa29d3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.264929 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6110323c-c3aa-4d7b-a201-20d85afa29d3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.265366 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f2d061f-103c-4b8a-aa45-3da02ffebff5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.265789 
4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6110323c-c3aa-4d7b-a201-20d85afa29d3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.265852 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f2d061f-103c-4b8a-aa45-3da02ffebff5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.275229 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qfgf\" (UniqueName: \"kubernetes.io/projected/9f2d061f-103c-4b8a-aa45-3da02ffebff5-kube-api-access-5qfgf\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.283851 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f2d061f-103c-4b8a-aa45-3da02ffebff5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.284350 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mnf\" (UniqueName: \"kubernetes.io/projected/6110323c-c3aa-4d7b-a201-20d85afa29d3-kube-api-access-w8mnf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.312317 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"9f2d061f-103c-4b8a-aa45-3da02ffebff5\") " pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.319871 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6110323c-c3aa-4d7b-a201-20d85afa29d3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: E0122 11:00:25.356251 4975 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0062ae04_c67b_4658_8170_40e10b2f579c.slice/crio-conmon-998ef76a516baada9e3e91c2313ab75de5bd3cda791b0477b686bec679951973.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.382639 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.403262 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.678995 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.679324 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.770682 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1b051e-7db1-4dc3-b807-b1a7cb058aa9" path="/var/lib/kubelet/pods/ac1b051e-7db1-4dc3-b807-b1a7cb058aa9/volumes" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.771549 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be23c045-c637-4832-8944-ed6250b0d8b3" path="/var/lib/kubelet/pods/be23c045-c637-4832-8944-ed6250b0d8b3/volumes" Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.838944 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.890389 4975 generic.go:334] "Generic (PLEG): container finished" podID="0062ae04-c67b-4658-8170-40e10b2f579c" containerID="998ef76a516baada9e3e91c2313ab75de5bd3cda791b0477b686bec679951973" exitCode=0 Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.890466 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" event={"ID":"0062ae04-c67b-4658-8170-40e10b2f579c","Type":"ContainerDied","Data":"998ef76a516baada9e3e91c2313ab75de5bd3cda791b0477b686bec679951973"} Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.892890 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9f2d061f-103c-4b8a-aa45-3da02ffebff5","Type":"ContainerStarted","Data":"df4f295b4e1f8c0f5211d50b5c85a360b8a9ca2247e4993c6ea68c76323c6a33"} Jan 22 11:00:25 crc kubenswrapper[4975]: I0122 11:00:25.939270 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 11:00:26 crc kubenswrapper[4975]: I0122 11:00:26.731572 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66vxg" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerName="registry-server" probeResult="failure" output=< Jan 22 11:00:26 crc kubenswrapper[4975]: timeout: failed to connect service ":50051" within 1s Jan 22 11:00:26 crc kubenswrapper[4975]: > Jan 22 11:00:26 crc kubenswrapper[4975]: I0122 11:00:26.908658 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" event={"ID":"0062ae04-c67b-4658-8170-40e10b2f579c","Type":"ContainerStarted","Data":"381c4f7dd782f6339428750f38df20178d7d6af8d55461b3e4a0f93602a111cb"} Jan 22 11:00:26 crc kubenswrapper[4975]: I0122 11:00:26.908993 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:26 crc kubenswrapper[4975]: I0122 11:00:26.910722 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6110323c-c3aa-4d7b-a201-20d85afa29d3","Type":"ContainerStarted","Data":"e1290737327dc189b56cfb8bb52b254379430eb6fa4337235bcb5c41f7b64242"} Jan 22 11:00:26 crc kubenswrapper[4975]: I0122 11:00:26.942138 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" podStartSLOduration=3.942109098 
podStartE2EDuration="3.942109098s" podCreationTimestamp="2026-01-22 11:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:00:26.93996341 +0000 UTC m=+1419.597999510" watchObservedRunningTime="2026-01-22 11:00:26.942109098 +0000 UTC m=+1419.600145238" Jan 22 11:00:27 crc kubenswrapper[4975]: I0122 11:00:27.919735 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6110323c-c3aa-4d7b-a201-20d85afa29d3","Type":"ContainerStarted","Data":"f61afe4d627db3ce481bd5a39c3c3661c357cd9af8edefc5e894e0a058051111"} Jan 22 11:00:27 crc kubenswrapper[4975]: I0122 11:00:27.922249 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9f2d061f-103c-4b8a-aa45-3da02ffebff5","Type":"ContainerStarted","Data":"8d55797080fa9b39190e7a23a4eaf101b8099c792c42a444cd46c860ea0e3d23"} Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.242570 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.312998 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-bp68q"] Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.313230 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" podUID="609a3918-aa87-49ed-818b-bcba858623a2" containerName="dnsmasq-dns" containerID="cri-o://ff6e45dfd32a446678f9bac33e1c9b7153a7ade6fe4fd2c771fc91296344d7b5" gracePeriod=10 Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.500853 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-j5kkw"] Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.502616 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.526842 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-j5kkw"] Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.548477 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-dns-svc\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.548529 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.548600 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.548637 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-config\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.548668 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.548724 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.548756 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnncx\" (UniqueName: \"kubernetes.io/projected/2266f8c0-fa93-40b2-994b-90c948a2a2f8-kube-api-access-cnncx\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.651383 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.651428 4975 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnncx\" (UniqueName: \"kubernetes.io/projected/2266f8c0-fa93-40b2-994b-90c948a2a2f8-kube-api-access-cnncx\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.651506 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-dns-svc\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.651529 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.651581 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.651606 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-config\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.651636 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.652485 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.653025 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.653868 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-dns-svc\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.654304 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.654473 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-config\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.655919 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2266f8c0-fa93-40b2-994b-90c948a2a2f8-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.677920 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnncx\" (UniqueName: \"kubernetes.io/projected/2266f8c0-fa93-40b2-994b-90c948a2a2f8-kube-api-access-cnncx\") pod \"dnsmasq-dns-55478c4467-j5kkw\" (UID: \"2266f8c0-fa93-40b2-994b-90c948a2a2f8\") " pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:34 crc kubenswrapper[4975]: I0122 11:00:34.892504 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.009910 4975 generic.go:334] "Generic (PLEG): container finished" podID="609a3918-aa87-49ed-818b-bcba858623a2" containerID="ff6e45dfd32a446678f9bac33e1c9b7153a7ade6fe4fd2c771fc91296344d7b5" exitCode=0 Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.010214 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" event={"ID":"609a3918-aa87-49ed-818b-bcba858623a2","Type":"ContainerDied","Data":"ff6e45dfd32a446678f9bac33e1c9b7153a7ade6fe4fd2c771fc91296344d7b5"} Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.010241 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" event={"ID":"609a3918-aa87-49ed-818b-bcba858623a2","Type":"ContainerDied","Data":"4f03837ab15b7fcede43f28bf5cef87caba2df8c722b7d6970349ec7db368a5c"} Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.010253 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f03837ab15b7fcede43f28bf5cef87caba2df8c722b7d6970349ec7db368a5c" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.010668 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.058625 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-nb\") pod \"609a3918-aa87-49ed-818b-bcba858623a2\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.058697 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tltg5\" (UniqueName: \"kubernetes.io/projected/609a3918-aa87-49ed-818b-bcba858623a2-kube-api-access-tltg5\") pod \"609a3918-aa87-49ed-818b-bcba858623a2\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.058751 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-swift-storage-0\") pod \"609a3918-aa87-49ed-818b-bcba858623a2\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.058806 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-config\") pod \"609a3918-aa87-49ed-818b-bcba858623a2\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.059409 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-svc\") pod \"609a3918-aa87-49ed-818b-bcba858623a2\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.059465 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-sb\") pod \"609a3918-aa87-49ed-818b-bcba858623a2\" (UID: \"609a3918-aa87-49ed-818b-bcba858623a2\") " Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.065020 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609a3918-aa87-49ed-818b-bcba858623a2-kube-api-access-tltg5" (OuterVolumeSpecName: "kube-api-access-tltg5") pod "609a3918-aa87-49ed-818b-bcba858623a2" (UID: "609a3918-aa87-49ed-818b-bcba858623a2"). InnerVolumeSpecName "kube-api-access-tltg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.150228 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "609a3918-aa87-49ed-818b-bcba858623a2" (UID: "609a3918-aa87-49ed-818b-bcba858623a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.151405 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "609a3918-aa87-49ed-818b-bcba858623a2" (UID: "609a3918-aa87-49ed-818b-bcba858623a2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.162057 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tltg5\" (UniqueName: \"kubernetes.io/projected/609a3918-aa87-49ed-818b-bcba858623a2-kube-api-access-tltg5\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.162104 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.162117 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.163827 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "609a3918-aa87-49ed-818b-bcba858623a2" (UID: "609a3918-aa87-49ed-818b-bcba858623a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.164021 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "609a3918-aa87-49ed-818b-bcba858623a2" (UID: "609a3918-aa87-49ed-818b-bcba858623a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.166137 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-config" (OuterVolumeSpecName: "config") pod "609a3918-aa87-49ed-818b-bcba858623a2" (UID: "609a3918-aa87-49ed-818b-bcba858623a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:35 crc kubenswrapper[4975]: W0122 11:00:35.170471 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2266f8c0_fa93_40b2_994b_90c948a2a2f8.slice/crio-75fd099a68e6737da46eb3152c0f6e0854e4b6e3290aa5d53cc8f85e18f8c53b WatchSource:0}: Error finding container 75fd099a68e6737da46eb3152c0f6e0854e4b6e3290aa5d53cc8f85e18f8c53b: Status 404 returned error can't find the container with id 75fd099a68e6737da46eb3152c0f6e0854e4b6e3290aa5d53cc8f85e18f8c53b Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.172514 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-j5kkw"] Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.263743 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.263773 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.263783 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/609a3918-aa87-49ed-818b-bcba858623a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:35 crc kubenswrapper[4975]: E0122 11:00:35.576386 4975 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2266f8c0_fa93_40b2_994b_90c948a2a2f8.slice/crio-f95b79e7dc888df9c9919b07c88bf4babdc60de73b1b918148e37a3178465a74.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2266f8c0_fa93_40b2_994b_90c948a2a2f8.slice/crio-conmon-f95b79e7dc888df9c9919b07c88bf4babdc60de73b1b918148e37a3178465a74.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.728753 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:35 crc kubenswrapper[4975]: I0122 11:00:35.801801 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:36 crc kubenswrapper[4975]: I0122 11:00:36.021845 4975 generic.go:334] "Generic (PLEG): container finished" podID="2266f8c0-fa93-40b2-994b-90c948a2a2f8" containerID="f95b79e7dc888df9c9919b07c88bf4babdc60de73b1b918148e37a3178465a74" exitCode=0 Jan 22 11:00:36 crc kubenswrapper[4975]: I0122 11:00:36.021979 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-j5kkw" event={"ID":"2266f8c0-fa93-40b2-994b-90c948a2a2f8","Type":"ContainerDied","Data":"f95b79e7dc888df9c9919b07c88bf4babdc60de73b1b918148e37a3178465a74"} Jan 22 11:00:36 crc kubenswrapper[4975]: I0122 11:00:36.022227 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-j5kkw" event={"ID":"2266f8c0-fa93-40b2-994b-90c948a2a2f8","Type":"ContainerStarted","Data":"75fd099a68e6737da46eb3152c0f6e0854e4b6e3290aa5d53cc8f85e18f8c53b"} Jan 22 11:00:36 crc kubenswrapper[4975]: I0122 11:00:36.022244 4975 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" Jan 22 11:00:36 crc kubenswrapper[4975]: I0122 11:00:36.067959 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-bp68q"] Jan 22 11:00:36 crc kubenswrapper[4975]: I0122 11:00:36.089326 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-bp68q"] Jan 22 11:00:37 crc kubenswrapper[4975]: I0122 11:00:37.033678 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-j5kkw" event={"ID":"2266f8c0-fa93-40b2-994b-90c948a2a2f8","Type":"ContainerStarted","Data":"06cd030ea16afa33e27312bc92a328c11424e7425745acbf6167c789274899c8"} Jan 22 11:00:37 crc kubenswrapper[4975]: I0122 11:00:37.034223 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:37 crc kubenswrapper[4975]: I0122 11:00:37.057727 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-j5kkw" podStartSLOduration=3.057702361 podStartE2EDuration="3.057702361s" podCreationTimestamp="2026-01-22 11:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:00:37.051687748 +0000 UTC m=+1429.709723898" watchObservedRunningTime="2026-01-22 11:00:37.057702361 +0000 UTC m=+1429.715738481" Jan 22 11:00:37 crc kubenswrapper[4975]: I0122 11:00:37.774932 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609a3918-aa87-49ed-818b-bcba858623a2" path="/var/lib/kubelet/pods/609a3918-aa87-49ed-818b-bcba858623a2/volumes" Jan 22 11:00:38 crc kubenswrapper[4975]: I0122 11:00:38.436446 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66vxg"] Jan 22 11:00:38 crc kubenswrapper[4975]: I0122 11:00:38.436901 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-66vxg" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerName="registry-server" containerID="cri-o://4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780" gracePeriod=2 Jan 22 11:00:38 crc kubenswrapper[4975]: I0122 11:00:38.883704 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:38 crc kubenswrapper[4975]: I0122 11:00:38.934866 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-utilities\") pod \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " Jan 22 11:00:38 crc kubenswrapper[4975]: I0122 11:00:38.934971 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fvrn\" (UniqueName: \"kubernetes.io/projected/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-kube-api-access-5fvrn\") pod \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " Jan 22 11:00:38 crc kubenswrapper[4975]: I0122 11:00:38.935104 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-catalog-content\") pod \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\" (UID: \"1788d398-e8d9-4e64-875a-a3a2d5c70f2e\") " Jan 22 11:00:38 crc kubenswrapper[4975]: I0122 11:00:38.936668 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-utilities" (OuterVolumeSpecName: "utilities") pod "1788d398-e8d9-4e64-875a-a3a2d5c70f2e" (UID: "1788d398-e8d9-4e64-875a-a3a2d5c70f2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:00:38 crc kubenswrapper[4975]: I0122 11:00:38.941575 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-kube-api-access-5fvrn" (OuterVolumeSpecName: "kube-api-access-5fvrn") pod "1788d398-e8d9-4e64-875a-a3a2d5c70f2e" (UID: "1788d398-e8d9-4e64-875a-a3a2d5c70f2e"). InnerVolumeSpecName "kube-api-access-5fvrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.037416 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.037449 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fvrn\" (UniqueName: \"kubernetes.io/projected/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-kube-api-access-5fvrn\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.052333 4975 generic.go:334] "Generic (PLEG): container finished" podID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerID="4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780" exitCode=0 Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.052420 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66vxg" event={"ID":"1788d398-e8d9-4e64-875a-a3a2d5c70f2e","Type":"ContainerDied","Data":"4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780"} Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.052413 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66vxg" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.052452 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66vxg" event={"ID":"1788d398-e8d9-4e64-875a-a3a2d5c70f2e","Type":"ContainerDied","Data":"b003b2b4cba1a24d0cb87080f26a3b97cc27e63487e8f48101eb9ad52bbfc157"} Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.052472 4975 scope.go:117] "RemoveContainer" containerID="4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.067984 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1788d398-e8d9-4e64-875a-a3a2d5c70f2e" (UID: "1788d398-e8d9-4e64-875a-a3a2d5c70f2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.075629 4975 scope.go:117] "RemoveContainer" containerID="9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.103915 4975 scope.go:117] "RemoveContainer" containerID="cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.139248 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1788d398-e8d9-4e64-875a-a3a2d5c70f2e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.148669 4975 scope.go:117] "RemoveContainer" containerID="4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780" Jan 22 11:00:39 crc kubenswrapper[4975]: E0122 11:00:39.149175 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780\": container with ID starting with 4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780 not found: ID does not exist" containerID="4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.149218 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780"} err="failed to get container status \"4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780\": rpc error: code = NotFound desc = could not find container \"4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780\": container with ID starting with 4920de37502634d423dd7ccba6f4da4a452de614acc92caace3a2bcab2e93780 not found: ID does not exist" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.149243 4975 scope.go:117] "RemoveContainer" containerID="9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec" Jan 22 11:00:39 crc kubenswrapper[4975]: E0122 11:00:39.149601 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec\": container with ID starting with 9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec not found: ID does not exist" containerID="9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec" Jan 22 11:00:39 crc 
kubenswrapper[4975]: I0122 11:00:39.149634 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec"} err="failed to get container status \"9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec\": rpc error: code = NotFound desc = could not find container \"9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec\": container with ID starting with 9095752fbae588aebce3760758157a32fc0382f160d39d8acfc0d7ea3aa2d5ec not found: ID does not exist" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.149657 4975 scope.go:117] "RemoveContainer" containerID="cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418" Jan 22 11:00:39 crc kubenswrapper[4975]: E0122 11:00:39.149978 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418\": container with ID starting with cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418 not found: ID does not exist" containerID="cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.150000 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418"} err="failed to get container status \"cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418\": rpc error: code = NotFound desc = could not find container \"cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418\": container with ID starting with cc82c37df4d0d4bb15143b1ac8c6980dacb87f148dc44878c8df32449bd34418 not found: ID does not exist" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.418352 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66vxg"] Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.434658 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-66vxg"] Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.767506 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" path="/var/lib/kubelet/pods/1788d398-e8d9-4e64-875a-a3a2d5c70f2e/volumes" Jan 22 11:00:39 crc kubenswrapper[4975]: I0122 11:00:39.821619 4975 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-89c5cd4d5-bp68q" podUID="609a3918-aa87-49ed-818b-bcba858623a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: i/o timeout" Jan 22 11:00:44 crc kubenswrapper[4975]: I0122 11:00:44.894601 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-j5kkw" Jan 22 11:00:44 crc kubenswrapper[4975]: I0122 11:00:44.979100 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-q5f65"] Jan 22 11:00:44 crc kubenswrapper[4975]: I0122 11:00:44.979511 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" podUID="0062ae04-c67b-4658-8170-40e10b2f579c" containerName="dnsmasq-dns" containerID="cri-o://381c4f7dd782f6339428750f38df20178d7d6af8d55461b3e4a0f93602a111cb" gracePeriod=10 Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.164567 4975 generic.go:334] "Generic (PLEG): container finished" 
podID="0062ae04-c67b-4658-8170-40e10b2f579c" containerID="381c4f7dd782f6339428750f38df20178d7d6af8d55461b3e4a0f93602a111cb" exitCode=0 Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.164620 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" event={"ID":"0062ae04-c67b-4658-8170-40e10b2f579c","Type":"ContainerDied","Data":"381c4f7dd782f6339428750f38df20178d7d6af8d55461b3e4a0f93602a111cb"} Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.462579 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.581038 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-nb\") pod \"0062ae04-c67b-4658-8170-40e10b2f579c\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.581223 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knfqk\" (UniqueName: \"kubernetes.io/projected/0062ae04-c67b-4658-8170-40e10b2f579c-kube-api-access-knfqk\") pod \"0062ae04-c67b-4658-8170-40e10b2f579c\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.581244 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-sb\") pod \"0062ae04-c67b-4658-8170-40e10b2f579c\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.581264 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-svc\") pod \"0062ae04-c67b-4658-8170-40e10b2f579c\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.581301 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-config\") pod \"0062ae04-c67b-4658-8170-40e10b2f579c\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.581365 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-swift-storage-0\") pod \"0062ae04-c67b-4658-8170-40e10b2f579c\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.581402 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-openstack-edpm-ipam\") pod \"0062ae04-c67b-4658-8170-40e10b2f579c\" (UID: \"0062ae04-c67b-4658-8170-40e10b2f579c\") " Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.596365 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0062ae04-c67b-4658-8170-40e10b2f579c-kube-api-access-knfqk" (OuterVolumeSpecName: "kube-api-access-knfqk") pod "0062ae04-c67b-4658-8170-40e10b2f579c" (UID: "0062ae04-c67b-4658-8170-40e10b2f579c"). InnerVolumeSpecName "kube-api-access-knfqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.637692 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0062ae04-c67b-4658-8170-40e10b2f579c" (UID: "0062ae04-c67b-4658-8170-40e10b2f579c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.639176 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-config" (OuterVolumeSpecName: "config") pod "0062ae04-c67b-4658-8170-40e10b2f579c" (UID: "0062ae04-c67b-4658-8170-40e10b2f579c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.642489 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0062ae04-c67b-4658-8170-40e10b2f579c" (UID: "0062ae04-c67b-4658-8170-40e10b2f579c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.646426 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0062ae04-c67b-4658-8170-40e10b2f579c" (UID: "0062ae04-c67b-4658-8170-40e10b2f579c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.654220 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0062ae04-c67b-4658-8170-40e10b2f579c" (UID: "0062ae04-c67b-4658-8170-40e10b2f579c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.661250 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "0062ae04-c67b-4658-8170-40e10b2f579c" (UID: "0062ae04-c67b-4658-8170-40e10b2f579c"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.683425 4975 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.683460 4975 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.683472 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.683482 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knfqk\" (UniqueName: \"kubernetes.io/projected/0062ae04-c67b-4658-8170-40e10b2f579c-kube-api-access-knfqk\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.683496 4975 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.683507 4975 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:45 crc kubenswrapper[4975]: I0122 11:00:45.683516 4975 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0062ae04-c67b-4658-8170-40e10b2f579c-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:46 crc kubenswrapper[4975]: I0122 11:00:46.177362 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" event={"ID":"0062ae04-c67b-4658-8170-40e10b2f579c","Type":"ContainerDied","Data":"fcdd79914a55b2e2f55a9e7a4c3568ff87dc4f5b952c9016fbe270b82e4810bc"} Jan 22 11:00:46 crc kubenswrapper[4975]: I0122 11:00:46.178436 4975 scope.go:117] "RemoveContainer" containerID="381c4f7dd782f6339428750f38df20178d7d6af8d55461b3e4a0f93602a111cb" Jan 22 11:00:46 crc kubenswrapper[4975]: I0122 11:00:46.177446 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-q5f65" Jan 22 11:00:46 crc kubenswrapper[4975]: I0122 11:00:46.202226 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-q5f65"] Jan 22 11:00:46 crc kubenswrapper[4975]: I0122 11:00:46.209234 4975 scope.go:117] "RemoveContainer" containerID="998ef76a516baada9e3e91c2313ab75de5bd3cda791b0477b686bec679951973" Jan 22 11:00:46 crc kubenswrapper[4975]: I0122 11:00:46.210437 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-q5f65"] Jan 22 11:00:47 crc kubenswrapper[4975]: I0122 11:00:47.774701 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0062ae04-c67b-4658-8170-40e10b2f579c" path="/var/lib/kubelet/pods/0062ae04-c67b-4658-8170-40e10b2f579c/volumes" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.944325 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p"] Jan 22 11:00:57 crc kubenswrapper[4975]: E0122 11:00:57.945175 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerName="registry-server" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945186 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerName="registry-server" Jan 22 11:00:57 crc kubenswrapper[4975]: E0122 11:00:57.945198 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerName="extract-content" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945203 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerName="extract-content" Jan 22 11:00:57 crc kubenswrapper[4975]: E0122 11:00:57.945221 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerName="extract-utilities" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945227 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerName="extract-utilities" Jan 22 11:00:57 crc kubenswrapper[4975]: E0122 11:00:57.945239 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0062ae04-c67b-4658-8170-40e10b2f579c" containerName="dnsmasq-dns" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945246 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="0062ae04-c67b-4658-8170-40e10b2f579c" containerName="dnsmasq-dns" Jan 22 11:00:57 crc kubenswrapper[4975]: E0122 11:00:57.945254 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609a3918-aa87-49ed-818b-bcba858623a2" containerName="init" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945259 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="609a3918-aa87-49ed-818b-bcba858623a2" containerName="init" Jan 22 11:00:57 crc kubenswrapper[4975]: E0122 11:00:57.945269 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609a3918-aa87-49ed-818b-bcba858623a2" containerName="dnsmasq-dns" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945276 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="609a3918-aa87-49ed-818b-bcba858623a2" containerName="dnsmasq-dns" Jan 22 11:00:57 crc kubenswrapper[4975]: E0122 11:00:57.945289 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0062ae04-c67b-4658-8170-40e10b2f579c" containerName="init" 
Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945295 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="0062ae04-c67b-4658-8170-40e10b2f579c" containerName="init" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945476 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="609a3918-aa87-49ed-818b-bcba858623a2" containerName="dnsmasq-dns" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945492 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="1788d398-e8d9-4e64-875a-a3a2d5c70f2e" containerName="registry-server" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.945506 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="0062ae04-c67b-4658-8170-40e10b2f579c" containerName="dnsmasq-dns" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.946057 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.949529 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.949642 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.949775 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.949901 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:00:57 crc kubenswrapper[4975]: I0122 11:00:57.961345 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p"] Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.063844 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.063897 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994hc\" (UniqueName: \"kubernetes.io/projected/f15014da-c77d-4583-9547-dceec5d6628a-kube-api-access-994hc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.064109 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.064148 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.165976 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.166051 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.166139 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.166179 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-994hc\" (UniqueName: \"kubernetes.io/projected/f15014da-c77d-4583-9547-dceec5d6628a-kube-api-access-994hc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.178220 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.178325 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.183507 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.186754 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-994hc\" (UniqueName: \"kubernetes.io/projected/f15014da-c77d-4583-9547-dceec5d6628a-kube-api-access-994hc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.269903 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" Jan 22 11:00:58 crc kubenswrapper[4975]: I0122 11:00:58.791914 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p"] Jan 22 11:00:59 crc kubenswrapper[4975]: I0122 11:00:59.343707 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" event={"ID":"f15014da-c77d-4583-9547-dceec5d6628a","Type":"ContainerStarted","Data":"a7abfb8f710be6eee3db407dc3c41b23f23106d7dc47a709ad086e4fd736deb6"} Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.140683 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29484661-rhjvs"] Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.142083 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.157100 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484661-rhjvs"] Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.325718 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-combined-ca-bundle\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.326250 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-fernet-keys\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.326552 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-kube-api-access-r6rx9\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.326802 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-config-data\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.362019 4975 generic.go:334] "Generic (PLEG): container finished" podID="6110323c-c3aa-4d7b-a201-20d85afa29d3" containerID="f61afe4d627db3ce481bd5a39c3c3661c357cd9af8edefc5e894e0a058051111" exitCode=0 Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.362051 4975 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6110323c-c3aa-4d7b-a201-20d85afa29d3","Type":"ContainerDied","Data":"f61afe4d627db3ce481bd5a39c3c3661c357cd9af8edefc5e894e0a058051111"} Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.367450 4975 generic.go:334] "Generic (PLEG): container finished" podID="9f2d061f-103c-4b8a-aa45-3da02ffebff5" containerID="8d55797080fa9b39190e7a23a4eaf101b8099c792c42a444cd46c860ea0e3d23" exitCode=0 Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.367515 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9f2d061f-103c-4b8a-aa45-3da02ffebff5","Type":"ContainerDied","Data":"8d55797080fa9b39190e7a23a4eaf101b8099c792c42a444cd46c860ea0e3d23"} Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.428667 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-config-data\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.429117 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-combined-ca-bundle\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.429373 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-fernet-keys\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.429490 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-kube-api-access-r6rx9\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.436522 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-config-data\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.437109 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-combined-ca-bundle\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.438838 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-fernet-keys\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.452319 4975 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-kube-api-access-r6rx9\") pod \"keystone-cron-29484661-rhjvs\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") " pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.541096 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:00 crc kubenswrapper[4975]: W0122 11:01:00.981107 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5fa66cb_0369_41ff_8b6f_d2c2367dfa3f.slice/crio-985c7fbec222f63039047a23d736b6e7fcdc2fc6734624ce660f930a8cddb1a9 WatchSource:0}: Error finding container 985c7fbec222f63039047a23d736b6e7fcdc2fc6734624ce660f930a8cddb1a9: Status 404 returned error can't find the container with id 985c7fbec222f63039047a23d736b6e7fcdc2fc6734624ce660f930a8cddb1a9 Jan 22 11:01:00 crc kubenswrapper[4975]: I0122 11:01:00.984296 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484661-rhjvs"] Jan 22 11:01:01 crc kubenswrapper[4975]: I0122 11:01:01.380825 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6110323c-c3aa-4d7b-a201-20d85afa29d3","Type":"ContainerStarted","Data":"0de73d218bbe408ae716cf0a8c7d2413c9f8f9aa60e4a608d840e37a23f16e88"} Jan 22 11:01:01 crc kubenswrapper[4975]: I0122 11:01:01.381390 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 11:01:01 crc kubenswrapper[4975]: I0122 11:01:01.384875 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9f2d061f-103c-4b8a-aa45-3da02ffebff5","Type":"ContainerStarted","Data":"ce37a524e3120aa87de0d49a4343feb9781e9b0c9e0097095ce270ded1245a27"} Jan 22 11:01:01 crc kubenswrapper[4975]: I0122 11:01:01.385089 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 11:01:01 crc kubenswrapper[4975]: I0122 11:01:01.386748 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484661-rhjvs" event={"ID":"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f","Type":"ContainerStarted","Data":"09e3aba06ee5f26457393727d53e2d26e5524e652e7cac04f50a9340cf058f94"} Jan 22 11:01:01 crc kubenswrapper[4975]: I0122 11:01:01.386774 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484661-rhjvs" event={"ID":"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f","Type":"ContainerStarted","Data":"985c7fbec222f63039047a23d736b6e7fcdc2fc6734624ce660f930a8cddb1a9"} Jan 22 11:01:01 crc kubenswrapper[4975]: I0122 11:01:01.419720 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.419699499 podStartE2EDuration="37.419699499s" podCreationTimestamp="2026-01-22 11:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:01:01.398310062 +0000 UTC m=+1454.056346182" watchObservedRunningTime="2026-01-22 11:01:01.419699499 +0000 UTC m=+1454.077735599" Jan 22 11:01:01 crc kubenswrapper[4975]: I0122 11:01:01.427500 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.427469119 
podStartE2EDuration="37.427469119s" podCreationTimestamp="2026-01-22 11:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:01:01.419969176 +0000 UTC m=+1454.078005286" watchObservedRunningTime="2026-01-22 11:01:01.427469119 +0000 UTC m=+1454.085505259" Jan 22 11:01:01 crc kubenswrapper[4975]: I0122 11:01:01.444700 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29484661-rhjvs" podStartSLOduration=1.4446782329999999 podStartE2EDuration="1.444678233s" podCreationTimestamp="2026-01-22 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:01:01.435486855 +0000 UTC m=+1454.093523005" watchObservedRunningTime="2026-01-22 11:01:01.444678233 +0000 UTC m=+1454.102714383" Jan 22 11:01:03 crc kubenswrapper[4975]: I0122 11:01:03.433160 4975 generic.go:334] "Generic (PLEG): container finished" podID="e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f" containerID="09e3aba06ee5f26457393727d53e2d26e5524e652e7cac04f50a9340cf058f94" exitCode=0 Jan 22 11:01:03 crc kubenswrapper[4975]: I0122 11:01:03.433270 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484661-rhjvs" event={"ID":"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f","Type":"ContainerDied","Data":"09e3aba06ee5f26457393727d53e2d26e5524e652e7cac04f50a9340cf058f94"} Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.464110 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484661-rhjvs" Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.514169 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484661-rhjvs" event={"ID":"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f","Type":"ContainerDied","Data":"985c7fbec222f63039047a23d736b6e7fcdc2fc6734624ce660f930a8cddb1a9"} Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.514212 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985c7fbec222f63039047a23d736b6e7fcdc2fc6734624ce660f930a8cddb1a9" Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.514275 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29484661-rhjvs"
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.578146 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-combined-ca-bundle\") pod \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") "
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.578409 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-fernet-keys\") pod \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") "
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.578483 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-config-data\") pod \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") "
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.578625 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-kube-api-access-r6rx9\") pod \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\" (UID: \"e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f\") "
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.582496 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-kube-api-access-r6rx9" (OuterVolumeSpecName: "kube-api-access-r6rx9") pod "e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f" (UID: "e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f"). InnerVolumeSpecName "kube-api-access-r6rx9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.582694 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f" (UID: "e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.610288 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f" (UID: "e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.634528 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-config-data" (OuterVolumeSpecName: "config-data") pod "e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f" (UID: "e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.680292 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.680346 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-kube-api-access-r6rx9\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.680360 4975 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:08 crc kubenswrapper[4975]: I0122 11:01:08.680370 4975 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:09 crc kubenswrapper[4975]: I0122 11:01:09.522575 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" event={"ID":"f15014da-c77d-4583-9547-dceec5d6628a","Type":"ContainerStarted","Data":"59a517e66d80136ff7caa6b7a470c72803cf19167ff4a7a2431599e8979bdf75"}
Jan 22 11:01:09 crc kubenswrapper[4975]: I0122 11:01:09.546151 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" podStartSLOduration=2.851639703 podStartE2EDuration="12.546134578s" podCreationTimestamp="2026-01-22 11:00:57 +0000 UTC" firstStartedPulling="2026-01-22 11:00:58.805659774 +0000 UTC m=+1451.463695884" lastFinishedPulling="2026-01-22 11:01:08.500154649 +0000 UTC m=+1461.158190759" observedRunningTime="2026-01-22 11:01:09.538905142 +0000 UTC m=+1462.196941252" watchObservedRunningTime="2026-01-22 11:01:09.546134578 +0000 UTC m=+1462.204170688"
Jan 22 11:01:15 crc kubenswrapper[4975]: I0122 11:01:15.387588 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 22 11:01:15 crc kubenswrapper[4975]: I0122 11:01:15.408605 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 11:01:20 crc kubenswrapper[4975]: I0122 11:01:20.685520 4975 generic.go:334] "Generic (PLEG): container finished" podID="f15014da-c77d-4583-9547-dceec5d6628a" containerID="59a517e66d80136ff7caa6b7a470c72803cf19167ff4a7a2431599e8979bdf75" exitCode=0
Jan 22 11:01:20 crc kubenswrapper[4975]: I0122 11:01:20.685690 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" event={"ID":"f15014da-c77d-4583-9547-dceec5d6628a","Type":"ContainerDied","Data":"59a517e66d80136ff7caa6b7a470c72803cf19167ff4a7a2431599e8979bdf75"}
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.133196 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.146397 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-inventory\") pod \"f15014da-c77d-4583-9547-dceec5d6628a\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") "
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.146554 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-repo-setup-combined-ca-bundle\") pod \"f15014da-c77d-4583-9547-dceec5d6628a\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") "
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.146631 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-ssh-key-openstack-edpm-ipam\") pod \"f15014da-c77d-4583-9547-dceec5d6628a\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") "
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.146790 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-994hc\" (UniqueName: \"kubernetes.io/projected/f15014da-c77d-4583-9547-dceec5d6628a-kube-api-access-994hc\") pod \"f15014da-c77d-4583-9547-dceec5d6628a\" (UID: \"f15014da-c77d-4583-9547-dceec5d6628a\") "
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.195276 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f15014da-c77d-4583-9547-dceec5d6628a" (UID: "f15014da-c77d-4583-9547-dceec5d6628a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.196659 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f15014da-c77d-4583-9547-dceec5d6628a-kube-api-access-994hc" (OuterVolumeSpecName: "kube-api-access-994hc") pod "f15014da-c77d-4583-9547-dceec5d6628a" (UID: "f15014da-c77d-4583-9547-dceec5d6628a"). InnerVolumeSpecName "kube-api-access-994hc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.198740 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-inventory" (OuterVolumeSpecName: "inventory") pod "f15014da-c77d-4583-9547-dceec5d6628a" (UID: "f15014da-c77d-4583-9547-dceec5d6628a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.202507 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f15014da-c77d-4583-9547-dceec5d6628a" (UID: "f15014da-c77d-4583-9547-dceec5d6628a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.249123 4975 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.249179 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.249192 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-994hc\" (UniqueName: \"kubernetes.io/projected/f15014da-c77d-4583-9547-dceec5d6628a-kube-api-access-994hc\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.249208 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f15014da-c77d-4583-9547-dceec5d6628a-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.707819 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p" event={"ID":"f15014da-c77d-4583-9547-dceec5d6628a","Type":"ContainerDied","Data":"a7abfb8f710be6eee3db407dc3c41b23f23106d7dc47a709ad086e4fd736deb6"}
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.708315 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7abfb8f710be6eee3db407dc3c41b23f23106d7dc47a709ad086e4fd736deb6"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.707932 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.820123 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"]
Jan 22 11:01:22 crc kubenswrapper[4975]: E0122 11:01:22.820964 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f" containerName="keystone-cron"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.821006 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f" containerName="keystone-cron"
Jan 22 11:01:22 crc kubenswrapper[4975]: E0122 11:01:22.821054 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15014da-c77d-4583-9547-dceec5d6628a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.821076 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15014da-c77d-4583-9547-dceec5d6628a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.821563 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f" containerName="keystone-cron"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.821628 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15014da-c77d-4583-9547-dceec5d6628a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.822977 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.826280 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.826686 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.826979 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.827464 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.852781 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"]
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.964767 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8v6sw\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.964931 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2qsx\" (UniqueName: \"kubernetes.io/projected/d6c907cc-684d-40a8-905a-977845138321-kube-api-access-w2qsx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8v6sw\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:22 crc kubenswrapper[4975]: I0122 11:01:22.965050 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8v6sw\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:23 crc kubenswrapper[4975]: I0122 11:01:23.066658 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8v6sw\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:23 crc kubenswrapper[4975]: I0122 11:01:23.066786 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2qsx\" (UniqueName: \"kubernetes.io/projected/d6c907cc-684d-40a8-905a-977845138321-kube-api-access-w2qsx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8v6sw\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:23 crc kubenswrapper[4975]: I0122 11:01:23.066862 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8v6sw\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:23 crc kubenswrapper[4975]: I0122 11:01:23.084449 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8v6sw\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:23 crc kubenswrapper[4975]: I0122 11:01:23.092027 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8v6sw\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:23 crc kubenswrapper[4975]: I0122 11:01:23.093323 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2qsx\" (UniqueName: \"kubernetes.io/projected/d6c907cc-684d-40a8-905a-977845138321-kube-api-access-w2qsx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8v6sw\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:23 crc kubenswrapper[4975]: I0122 11:01:23.144115 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:23 crc kubenswrapper[4975]: I0122 11:01:23.689701 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"]
Jan 22 11:01:23 crc kubenswrapper[4975]: W0122 11:01:23.690114 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6c907cc_684d_40a8_905a_977845138321.slice/crio-722cf3dd4690d7da5645edd05b8c7f9e69a0bdfdd95bd48d1db2c72c3562c5a9 WatchSource:0}: Error finding container 722cf3dd4690d7da5645edd05b8c7f9e69a0bdfdd95bd48d1db2c72c3562c5a9: Status 404 returned error can't find the container with id 722cf3dd4690d7da5645edd05b8c7f9e69a0bdfdd95bd48d1db2c72c3562c5a9
Jan 22 11:01:23 crc kubenswrapper[4975]: I0122 11:01:23.719877 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw" event={"ID":"d6c907cc-684d-40a8-905a-977845138321","Type":"ContainerStarted","Data":"722cf3dd4690d7da5645edd05b8c7f9e69a0bdfdd95bd48d1db2c72c3562c5a9"}
Jan 22 11:01:24 crc kubenswrapper[4975]: I0122 11:01:24.732557 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw" event={"ID":"d6c907cc-684d-40a8-905a-977845138321","Type":"ContainerStarted","Data":"42fa6f0f6705faefedde9c58bc9504aaa5b18846954a290dd491a84ea888ab06"}
Jan 22 11:01:24 crc kubenswrapper[4975]: I0122 11:01:24.752765 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw" podStartSLOduration=2.218642985 podStartE2EDuration="2.752737419s" podCreationTimestamp="2026-01-22 11:01:22 +0000 UTC" firstStartedPulling="2026-01-22 11:01:23.694023478 +0000 UTC m=+1476.352059588" lastFinishedPulling="2026-01-22 11:01:24.228117902 +0000 UTC m=+1476.886154022" observedRunningTime="2026-01-22 11:01:24.748061403 +0000 UTC m=+1477.406097513" watchObservedRunningTime="2026-01-22 11:01:24.752737419 +0000 UTC m=+1477.410773549"
Jan 22 11:01:27 crc kubenswrapper[4975]: I0122 11:01:27.782996 4975 generic.go:334] "Generic (PLEG): container finished" podID="d6c907cc-684d-40a8-905a-977845138321" containerID="42fa6f0f6705faefedde9c58bc9504aaa5b18846954a290dd491a84ea888ab06" exitCode=0
Jan 22 11:01:27 crc kubenswrapper[4975]: I0122 11:01:27.791185 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw" event={"ID":"d6c907cc-684d-40a8-905a-977845138321","Type":"ContainerDied","Data":"42fa6f0f6705faefedde9c58bc9504aaa5b18846954a290dd491a84ea888ab06"}
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.230029 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.414536 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-ssh-key-openstack-edpm-ipam\") pod \"d6c907cc-684d-40a8-905a-977845138321\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") "
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.414980 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2qsx\" (UniqueName: \"kubernetes.io/projected/d6c907cc-684d-40a8-905a-977845138321-kube-api-access-w2qsx\") pod \"d6c907cc-684d-40a8-905a-977845138321\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") "
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.415052 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-inventory\") pod \"d6c907cc-684d-40a8-905a-977845138321\" (UID: \"d6c907cc-684d-40a8-905a-977845138321\") "
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.426080 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6c907cc-684d-40a8-905a-977845138321-kube-api-access-w2qsx" (OuterVolumeSpecName: "kube-api-access-w2qsx") pod "d6c907cc-684d-40a8-905a-977845138321" (UID: "d6c907cc-684d-40a8-905a-977845138321"). InnerVolumeSpecName "kube-api-access-w2qsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.451526 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d6c907cc-684d-40a8-905a-977845138321" (UID: "d6c907cc-684d-40a8-905a-977845138321"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.461639 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-inventory" (OuterVolumeSpecName: "inventory") pod "d6c907cc-684d-40a8-905a-977845138321" (UID: "d6c907cc-684d-40a8-905a-977845138321"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.517433 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2qsx\" (UniqueName: \"kubernetes.io/projected/d6c907cc-684d-40a8-905a-977845138321-kube-api-access-w2qsx\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.517469 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.517479 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6c907cc-684d-40a8-905a-977845138321-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.809743 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw" event={"ID":"d6c907cc-684d-40a8-905a-977845138321","Type":"ContainerDied","Data":"722cf3dd4690d7da5645edd05b8c7f9e69a0bdfdd95bd48d1db2c72c3562c5a9"}
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.809794 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="722cf3dd4690d7da5645edd05b8c7f9e69a0bdfdd95bd48d1db2c72c3562c5a9"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.809842 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8v6sw"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.902877 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"]
Jan 22 11:01:29 crc kubenswrapper[4975]: E0122 11:01:29.903430 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6c907cc-684d-40a8-905a-977845138321" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.903448 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6c907cc-684d-40a8-905a-977845138321" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.903671 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6c907cc-684d-40a8-905a-977845138321" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.904455 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.906897 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.907075 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.907236 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.909092 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.919509 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"]
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.935757 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.935832 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrgx\" (UniqueName: \"kubernetes.io/projected/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-kube-api-access-ztrgx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.936075 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:29 crc kubenswrapper[4975]: I0122 11:01:29.936275 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.037601 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.037662 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrgx\" (UniqueName: \"kubernetes.io/projected/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-kube-api-access-ztrgx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.037751 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.037832 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.041810 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.041836 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.052466 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.057298 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrgx\" (UniqueName: \"kubernetes.io/projected/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-kube-api-access-ztrgx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.235450 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.861011 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c"]
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.963711 4975 scope.go:117] "RemoveContainer" containerID="b647f21bef594ae123a6823ccf63a27a25ce7935528764215977800760a8b8d7"
Jan 22 11:01:30 crc kubenswrapper[4975]: I0122 11:01:30.988968 4975 scope.go:117] "RemoveContainer" containerID="7f6c483d627120d86189c836db318cf572a7bb52c27ffba7e59b0671a320cfb0"
Jan 22 11:01:31 crc kubenswrapper[4975]: I0122 11:01:31.039034 4975 scope.go:117] "RemoveContainer" containerID="aa65bf0422d1f3ad1d38f517328c447c1e5535727f8eac18f932e4a93a8732d4"
Jan 22 11:01:31 crc kubenswrapper[4975]: I0122 11:01:31.824744 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c" event={"ID":"3e928e99-1e6c-4d3e-a7f3-bd725e29c415","Type":"ContainerStarted","Data":"f89667a273bda3ab1051e657fd4abba2854c8635bed7762b26ec8281cf2b63fc"}
Jan 22 11:01:32 crc kubenswrapper[4975]: I0122 11:01:32.838807 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c" event={"ID":"3e928e99-1e6c-4d3e-a7f3-bd725e29c415","Type":"ContainerStarted","Data":"257e4501d262ad81c56f82f4fa010953f2992a7d24a10dd41b208ef9782edc21"}
Jan 22 11:01:32 crc kubenswrapper[4975]: I0122 11:01:32.870449 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c" podStartSLOduration=2.259486208 podStartE2EDuration="3.870423252s" podCreationTimestamp="2026-01-22 11:01:29 +0000 UTC" firstStartedPulling="2026-01-22 11:01:30.865784763 +0000 UTC m=+1483.523820873" lastFinishedPulling="2026-01-22 11:01:32.476721807 +0000 UTC m=+1485.134757917" observedRunningTime="2026-01-22 11:01:32.858870271 +0000 UTC m=+1485.516906431" watchObservedRunningTime="2026-01-22 11:01:32.870423252 +0000 UTC m=+1485.528459392"
Jan 22 11:02:27 crc kubenswrapper[4975]: I0122 11:02:27.436906 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:02:27 crc kubenswrapper[4975]: I0122 11:02:27.437658 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:02:31 crc kubenswrapper[4975]: I0122 11:02:31.148769 4975 scope.go:117] "RemoveContainer" containerID="a32f2da6007acfc6594da7a7527f3ced6950bdb96d1b93bb33280ee29f9dec37"
Jan 22 11:02:57 crc kubenswrapper[4975]: I0122 11:02:57.436837 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:02:57 crc kubenswrapper[4975]: I0122 11:02:57.437801 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.187540 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rnd25"]
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.192500 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.203019 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfgkp\" (UniqueName: \"kubernetes.io/projected/d16a6f99-522a-49b6-8938-a0ac3ff36c33-kube-api-access-nfgkp\") pod \"community-operators-rnd25\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") " pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.203402 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-utilities\") pod \"community-operators-rnd25\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") " pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.203737 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-catalog-content\") pod \"community-operators-rnd25\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") " pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.230007 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnd25"]
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.306120 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-catalog-content\") pod \"community-operators-rnd25\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") " pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.306232 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfgkp\" (UniqueName: \"kubernetes.io/projected/d16a6f99-522a-49b6-8938-a0ac3ff36c33-kube-api-access-nfgkp\") pod \"community-operators-rnd25\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") " pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.306315 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-utilities\") pod \"community-operators-rnd25\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") " pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.306731 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-catalog-content\") pod \"community-operators-rnd25\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") " pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.306820 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-utilities\") pod \"community-operators-rnd25\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") " pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.324285 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfgkp\" (UniqueName: \"kubernetes.io/projected/d16a6f99-522a-49b6-8938-a0ac3ff36c33-kube-api-access-nfgkp\") pod \"community-operators-rnd25\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") " pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:20 crc kubenswrapper[4975]: I0122 11:03:20.527703 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:21 crc kubenswrapper[4975]: I0122 11:03:21.043058 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnd25"]
Jan 22 11:03:21 crc kubenswrapper[4975]: I0122 11:03:21.073882 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnd25" event={"ID":"d16a6f99-522a-49b6-8938-a0ac3ff36c33","Type":"ContainerStarted","Data":"d70ecf2f7041779a4399b81b1aa77aab6f09e72caff70035a0e7e78e156d02f2"}
Jan 22 11:03:22 crc kubenswrapper[4975]: I0122 11:03:22.087274 4975 generic.go:334] "Generic (PLEG): container finished" podID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerID="5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959" exitCode=0
Jan 22 11:03:22 crc kubenswrapper[4975]: I0122 11:03:22.087367 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnd25" event={"ID":"d16a6f99-522a-49b6-8938-a0ac3ff36c33","Type":"ContainerDied","Data":"5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959"}
Jan 22 11:03:23 crc kubenswrapper[4975]: I0122 11:03:23.098172 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnd25" event={"ID":"d16a6f99-522a-49b6-8938-a0ac3ff36c33","Type":"ContainerStarted","Data":"2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649"}
Jan 22 11:03:24 crc kubenswrapper[4975]: I0122 11:03:24.116856 4975 generic.go:334] "Generic (PLEG): container finished" podID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerID="2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649" exitCode=0
Jan 22 11:03:24 crc kubenswrapper[4975]: I0122 11:03:24.117253 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnd25" event={"ID":"d16a6f99-522a-49b6-8938-a0ac3ff36c33","Type":"ContainerDied","Data":"2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649"}
Jan 22 11:03:25 crc kubenswrapper[4975]: I0122 11:03:25.133409 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnd25" event={"ID":"d16a6f99-522a-49b6-8938-a0ac3ff36c33","Type":"ContainerStarted","Data":"78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5"}
Jan 22 11:03:25 crc kubenswrapper[4975]: I0122 11:03:25.160664 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rnd25" podStartSLOduration=2.746121136 podStartE2EDuration="5.160645883s" podCreationTimestamp="2026-01-22 11:03:20 +0000 UTC" firstStartedPulling="2026-01-22 11:03:22.090183614 +0000 UTC m=+1594.748219754" lastFinishedPulling="2026-01-22 11:03:24.504708391 +0000 UTC m=+1597.162744501" observedRunningTime="2026-01-22 11:03:25.158561677 +0000 UTC m=+1597.816597817" watchObservedRunningTime="2026-01-22 11:03:25.160645883 +0000 UTC m=+1597.818681993"
Jan 22 11:03:27 crc kubenswrapper[4975]: I0122 11:03:27.436159 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:03:27 crc kubenswrapper[4975]: I0122 11:03:27.437090 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:03:27 crc kubenswrapper[4975]: I0122 11:03:27.437230 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72"
Jan 22 11:03:27 crc kubenswrapper[4975]: I0122 11:03:27.437997 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 11:03:27 crc kubenswrapper[4975]: I0122 11:03:27.438175 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" gracePeriod=600
Jan 22 11:03:27 crc kubenswrapper[4975]: E0122 11:03:27.565382 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:03:28 crc kubenswrapper[4975]: I0122 11:03:28.165745 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" exitCode=0
Jan 22 11:03:28 crc kubenswrapper[4975]: I0122 11:03:28.165784 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"}
Jan 22 11:03:28 crc kubenswrapper[4975]: I0122 11:03:28.165852 4975 scope.go:117] "RemoveContainer" containerID="16cfad28f24f35696e7c92e7ab98650eab8e998609b5fcdbdad61136fb9627fd"
Jan 22 11:03:28 crc kubenswrapper[4975]: I0122 11:03:28.166773 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"
Jan 22 11:03:28 crc kubenswrapper[4975]: E0122 11:03:28.167269 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:03:30 crc kubenswrapper[4975]: I0122 11:03:30.528141 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:30 crc kubenswrapper[4975]: I0122 11:03:30.528577 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:30 crc kubenswrapper[4975]: I0122 11:03:30.606115 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:31 crc kubenswrapper[4975]: I0122 11:03:31.253584 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:31 crc kubenswrapper[4975]: I0122 11:03:31.301694 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnd25"]
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.226844 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rnd25" podUID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerName="registry-server" containerID="cri-o://78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5" gracePeriod=2
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.293073 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g4mxs"]
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.299952 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.313160 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4mxs"]
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.417119 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-catalog-content\") pod \"certified-operators-g4mxs\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") " pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.417199 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-utilities\") pod \"certified-operators-g4mxs\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") " pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.417256 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8hg6\" (UniqueName: \"kubernetes.io/projected/718752bc-a937-4f5a-90de-e46c09f42b6a-kube-api-access-m8hg6\") pod \"certified-operators-g4mxs\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") " pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.519277 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-catalog-content\") pod \"certified-operators-g4mxs\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") " pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.519398 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-utilities\") pod \"certified-operators-g4mxs\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") " pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.519449 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8hg6\" (UniqueName: \"kubernetes.io/projected/718752bc-a937-4f5a-90de-e46c09f42b6a-kube-api-access-m8hg6\") pod \"certified-operators-g4mxs\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") " pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.520240 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-catalog-content\") pod \"certified-operators-g4mxs\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") " pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.520544 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-utilities\") pod \"certified-operators-g4mxs\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") " pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.542671 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8hg6\" (UniqueName: \"kubernetes.io/projected/718752bc-a937-4f5a-90de-e46c09f42b6a-kube-api-access-m8hg6\") pod \"certified-operators-g4mxs\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") " pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.703081 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.727367 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.824672 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-utilities\") pod \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") "
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.824741 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfgkp\" (UniqueName: \"kubernetes.io/projected/d16a6f99-522a-49b6-8938-a0ac3ff36c33-kube-api-access-nfgkp\") pod \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") "
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.824846 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-catalog-content\") pod \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\" (UID: \"d16a6f99-522a-49b6-8938-a0ac3ff36c33\") "
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.826402 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-utilities" (OuterVolumeSpecName: "utilities") pod "d16a6f99-522a-49b6-8938-a0ac3ff36c33" (UID: "d16a6f99-522a-49b6-8938-a0ac3ff36c33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.837591 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16a6f99-522a-49b6-8938-a0ac3ff36c33-kube-api-access-nfgkp" (OuterVolumeSpecName: "kube-api-access-nfgkp") pod "d16a6f99-522a-49b6-8938-a0ac3ff36c33" (UID: "d16a6f99-522a-49b6-8938-a0ac3ff36c33"). InnerVolumeSpecName "kube-api-access-nfgkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.931238 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:03:33 crc kubenswrapper[4975]: I0122 11:03:33.931654 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfgkp\" (UniqueName: \"kubernetes.io/projected/d16a6f99-522a-49b6-8938-a0ac3ff36c33-kube-api-access-nfgkp\") on node \"crc\" DevicePath \"\""
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.213218 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d16a6f99-522a-49b6-8938-a0ac3ff36c33" (UID: "d16a6f99-522a-49b6-8938-a0ac3ff36c33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.239083 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d16a6f99-522a-49b6-8938-a0ac3ff36c33-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.243034 4975 generic.go:334] "Generic (PLEG): container finished" podID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerID="78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5" exitCode=0
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.243118 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnd25"
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.243106 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnd25" event={"ID":"d16a6f99-522a-49b6-8938-a0ac3ff36c33","Type":"ContainerDied","Data":"78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5"}
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.243196 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnd25" event={"ID":"d16a6f99-522a-49b6-8938-a0ac3ff36c33","Type":"ContainerDied","Data":"d70ecf2f7041779a4399b81b1aa77aab6f09e72caff70035a0e7e78e156d02f2"}
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.243244 4975 scope.go:117] "RemoveContainer" containerID="78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5"
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.288937 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4mxs"]
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.292930 4975 scope.go:117] "RemoveContainer" containerID="2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649"
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.314685 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnd25"]
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.321398 4975 scope.go:117] "RemoveContainer" containerID="5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959"
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.329148 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rnd25"]
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.359562 4975 scope.go:117] "RemoveContainer" containerID="78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5"
Jan 22 11:03:34 crc kubenswrapper[4975]: E0122 11:03:34.360079 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5\": container with ID starting with 78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5 not found: ID does not exist" containerID="78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5"
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.360109 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5"} err="failed to get container status \"78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5\": rpc error: code = NotFound desc = could not find container \"78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5\": container with ID starting with 78ac620e071ed3baca030a81b4f2844831c658231744bd91c072efb8725716f5 not found: ID does not exist"
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.360132 4975 scope.go:117] "RemoveContainer" containerID="2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649"
Jan 22 11:03:34 crc kubenswrapper[4975]: E0122 11:03:34.360485 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649\": container with ID starting with 2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649 not found: ID does not exist" containerID="2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649"
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.360509 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649"} err="failed to get container status \"2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649\": rpc error: code = NotFound desc = could not find container \"2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649\": container with ID starting with 2a03cd80bcf2640349613a0f1956a358d1f5dbd454c5ed7a5c0597deb3a5a649 not found: ID does not exist"
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.360522 4975 scope.go:117] "RemoveContainer" containerID="5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959"
Jan 22 11:03:34 crc kubenswrapper[4975]: E0122 11:03:34.360880 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959\": container with ID starting with 5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959 not found: ID does not exist" containerID="5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959"
Jan 22 11:03:34 crc kubenswrapper[4975]: I0122 11:03:34.360906 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959"} err="failed to get container status \"5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959\": rpc error: code = NotFound desc = could not find container \"5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959\": container with ID starting with 5c94106541dc5583f35fbec2066d586aba57d815b1295eeff355f996f2bfb959 not found: ID does not exist"
Jan 22 11:03:35 crc kubenswrapper[4975]: I0122 11:03:35.255742 4975 generic.go:334] "Generic (PLEG): container finished" podID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerID="8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971" exitCode=0
Jan 22 11:03:35 crc kubenswrapper[4975]: I0122 11:03:35.255825 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4mxs" event={"ID":"718752bc-a937-4f5a-90de-e46c09f42b6a","Type":"ContainerDied","Data":"8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971"}
Jan 22 11:03:35 crc kubenswrapper[4975]: I0122 11:03:35.256119 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4mxs" event={"ID":"718752bc-a937-4f5a-90de-e46c09f42b6a","Type":"ContainerStarted","Data":"b6bc6f81d61d7fffa66369f9ca1374d9f0f9dd6dee8a18dfddc5f70becfc074c"}
Jan 22 11:03:35 crc kubenswrapper[4975]: I0122 11:03:35.777279 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" path="/var/lib/kubelet/pods/d16a6f99-522a-49b6-8938-a0ac3ff36c33/volumes"
Jan 22 11:03:37 crc kubenswrapper[4975]: I0122 11:03:37.316921 4975 generic.go:334] "Generic (PLEG): container finished" podID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerID="50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1" exitCode=0
Jan 22 11:03:37 crc kubenswrapper[4975]: I0122 11:03:37.316997 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4mxs" event={"ID":"718752bc-a937-4f5a-90de-e46c09f42b6a","Type":"ContainerDied","Data":"50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1"}
Jan 22 11:03:38 crc kubenswrapper[4975]: I0122 11:03:38.332879 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4mxs" event={"ID":"718752bc-a937-4f5a-90de-e46c09f42b6a","Type":"ContainerStarted","Data":"cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959"}
Jan 22 11:03:38 crc kubenswrapper[4975]: I0122 11:03:38.373234 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g4mxs" podStartSLOduration=2.870358308 podStartE2EDuration="5.373207858s" podCreationTimestamp="2026-01-22 11:03:33 +0000 UTC" firstStartedPulling="2026-01-22 11:03:35.25880491 +0000 UTC m=+1607.916841030" lastFinishedPulling="2026-01-22 11:03:37.76165445 +0000 UTC m=+1610.419690580" observedRunningTime="2026-01-22 11:03:38.360065537 +0000 UTC m=+1611.018101687" watchObservedRunningTime="2026-01-22 11:03:38.373207858 +0000 UTC m=+1611.031244008"
Jan 22 11:03:43 crc kubenswrapper[4975]: I0122 11:03:43.728073 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:43 crc kubenswrapper[4975]: I0122 11:03:43.728685 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:43 crc kubenswrapper[4975]: I0122 11:03:43.755239 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"
Jan 22 11:03:43 crc kubenswrapper[4975]: E0122 11:03:43.755968 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:03:43 crc kubenswrapper[4975]: I0122 11:03:43.786453 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:44 crc kubenswrapper[4975]: I0122 11:03:44.486532 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:44 crc kubenswrapper[4975]: I0122 11:03:44.545539 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4mxs"]
Jan 22 11:03:46 crc kubenswrapper[4975]: I0122 11:03:46.434961 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g4mxs" podUID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerName="registry-server" containerID="cri-o://cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959" gracePeriod=2
Jan 22 11:03:46 crc kubenswrapper[4975]: I0122 11:03:46.976010 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4mxs"
Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.123269 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-utilities\") pod \"718752bc-a937-4f5a-90de-e46c09f42b6a\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") "
Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.123464 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-catalog-content\") pod \"718752bc-a937-4f5a-90de-e46c09f42b6a\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") "
Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.124122 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-utilities" (OuterVolumeSpecName: "utilities") pod "718752bc-a937-4f5a-90de-e46c09f42b6a" (UID: "718752bc-a937-4f5a-90de-e46c09f42b6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.125129 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8hg6\" (UniqueName: \"kubernetes.io/projected/718752bc-a937-4f5a-90de-e46c09f42b6a-kube-api-access-m8hg6\") pod \"718752bc-a937-4f5a-90de-e46c09f42b6a\" (UID: \"718752bc-a937-4f5a-90de-e46c09f42b6a\") "
Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.126046 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.130497 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718752bc-a937-4f5a-90de-e46c09f42b6a-kube-api-access-m8hg6" (OuterVolumeSpecName: "kube-api-access-m8hg6") pod "718752bc-a937-4f5a-90de-e46c09f42b6a" (UID: "718752bc-a937-4f5a-90de-e46c09f42b6a"). InnerVolumeSpecName "kube-api-access-m8hg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.171707 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "718752bc-a937-4f5a-90de-e46c09f42b6a" (UID: "718752bc-a937-4f5a-90de-e46c09f42b6a"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.227768 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718752bc-a937-4f5a-90de-e46c09f42b6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.227819 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8hg6\" (UniqueName: \"kubernetes.io/projected/718752bc-a937-4f5a-90de-e46c09f42b6a-kube-api-access-m8hg6\") on node \"crc\" DevicePath \"\"" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.449026 4975 generic.go:334] "Generic (PLEG): container finished" podID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerID="cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959" exitCode=0 Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.449076 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4mxs" event={"ID":"718752bc-a937-4f5a-90de-e46c09f42b6a","Type":"ContainerDied","Data":"cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959"} Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.449107 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4mxs" event={"ID":"718752bc-a937-4f5a-90de-e46c09f42b6a","Type":"ContainerDied","Data":"b6bc6f81d61d7fffa66369f9ca1374d9f0f9dd6dee8a18dfddc5f70becfc074c"} Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.449129 4975 scope.go:117] "RemoveContainer" containerID="cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.449135 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4mxs" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.486002 4975 scope.go:117] "RemoveContainer" containerID="50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.497863 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4mxs"] Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.509061 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g4mxs"] Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.521306 4975 scope.go:117] "RemoveContainer" containerID="8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.582054 4975 scope.go:117] "RemoveContainer" containerID="cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959" Jan 22 11:03:47 crc kubenswrapper[4975]: E0122 11:03:47.583063 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959\": container with ID starting with cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959 not found: ID does not exist" containerID="cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.583111 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959"} err="failed to get container status \"cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959\": rpc error: code = NotFound desc = could not find container \"cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959\": container with ID starting with cfd8b1e5bba04928cbe9608a077fecb9ad36308fcc486edf9ba341a1ee7b8959 not found: ID does not exist" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.583161 4975 scope.go:117] "RemoveContainer" containerID="50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1" Jan 22 11:03:47 crc kubenswrapper[4975]: E0122 11:03:47.583490 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1\": container with ID starting with 50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1 not found: ID does not exist" containerID="50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.583538 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1"} err="failed to get container status \"50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1\": rpc error: code = NotFound desc = could not find container \"50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1\": container with ID starting with 50868d8bd5a7852a0ca725770d98eb14f23e239c7133c5f59702a6cabfc761b1 not found: ID does not exist" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.583565 4975 scope.go:117] "RemoveContainer" containerID="8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971" Jan 22 11:03:47 crc kubenswrapper[4975]: E0122 11:03:47.583807 4975 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971\": container with ID starting with 8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971 not found: ID does not exist" containerID="8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.583845 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971"} err="failed to get container status \"8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971\": rpc error: code = NotFound desc = could not find container \"8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971\": container with ID starting with 8ef8fc3f0006e88bd5f23552866b713563d16626d983092987492240c6e6e971 not found: ID does not exist" Jan 22 11:03:47 crc kubenswrapper[4975]: I0122 11:03:47.781373 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718752bc-a937-4f5a-90de-e46c09f42b6a" path="/var/lib/kubelet/pods/718752bc-a937-4f5a-90de-e46c09f42b6a/volumes" Jan 22 11:03:54 crc kubenswrapper[4975]: I0122 11:03:54.756215 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:03:54 crc kubenswrapper[4975]: E0122 11:03:54.757461 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.242320 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4w54f"] Jan 22 11:03:58 crc kubenswrapper[4975]: E0122 11:03:58.243277 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerName="extract-content" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.243303 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerName="extract-content" Jan 22 11:03:58 crc kubenswrapper[4975]: E0122 11:03:58.243397 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerName="extract-utilities" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.243417 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerName="extract-utilities" Jan 22 11:03:58 crc kubenswrapper[4975]: E0122 11:03:58.243446 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerName="registry-server" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.243460 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerName="registry-server" Jan 22 11:03:58 crc kubenswrapper[4975]: E0122 11:03:58.243484 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerName="registry-server" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.243497 4975 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerName="registry-server" Jan 22 11:03:58 crc kubenswrapper[4975]: E0122 11:03:58.243518 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerName="extract-utilities" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.243529 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerName="extract-utilities" Jan 22 11:03:58 crc kubenswrapper[4975]: E0122 11:03:58.243563 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerName="extract-content" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.243574 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerName="extract-content" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.243905 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="718752bc-a937-4f5a-90de-e46c09f42b6a" containerName="registry-server" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.243944 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16a6f99-522a-49b6-8938-a0ac3ff36c33" containerName="registry-server" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.248735 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.266793 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w54f"] Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.374316 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-utilities\") pod \"redhat-marketplace-4w54f\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.374444 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-catalog-content\") pod \"redhat-marketplace-4w54f\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.374488 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6hp\" (UniqueName: \"kubernetes.io/projected/9595946b-a571-4e7c-b84b-35a6e9f05baa-kube-api-access-4f6hp\") pod \"redhat-marketplace-4w54f\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.476745 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-catalog-content\") pod \"redhat-marketplace-4w54f\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.476853 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f6hp\" (UniqueName: \"kubernetes.io/projected/9595946b-a571-4e7c-b84b-35a6e9f05baa-kube-api-access-4f6hp\") 
pod \"redhat-marketplace-4w54f\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.476957 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-utilities\") pod \"redhat-marketplace-4w54f\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.477737 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-utilities\") pod \"redhat-marketplace-4w54f\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.478161 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-catalog-content\") pod \"redhat-marketplace-4w54f\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.501267 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f6hp\" (UniqueName: \"kubernetes.io/projected/9595946b-a571-4e7c-b84b-35a6e9f05baa-kube-api-access-4f6hp\") pod \"redhat-marketplace-4w54f\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:58 crc kubenswrapper[4975]: I0122 11:03:58.585749 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:03:59 crc kubenswrapper[4975]: I0122 11:03:59.057516 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w54f"] Jan 22 11:03:59 crc kubenswrapper[4975]: I0122 11:03:59.583705 4975 generic.go:334] "Generic (PLEG): container finished" podID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerID="9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d" exitCode=0 Jan 22 11:03:59 crc kubenswrapper[4975]: I0122 11:03:59.583774 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w54f" event={"ID":"9595946b-a571-4e7c-b84b-35a6e9f05baa","Type":"ContainerDied","Data":"9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d"} Jan 22 11:03:59 crc kubenswrapper[4975]: I0122 11:03:59.585031 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w54f" event={"ID":"9595946b-a571-4e7c-b84b-35a6e9f05baa","Type":"ContainerStarted","Data":"c82b7126a0501fa89d6d67e8ba4f97723297002352c987bd7bf6f93d15fd66de"} Jan 22 11:04:00 crc kubenswrapper[4975]: I0122 11:04:00.595722 4975 generic.go:334] "Generic (PLEG): container finished" podID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerID="3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30" exitCode=0 Jan 22 11:04:00 crc kubenswrapper[4975]: I0122 11:04:00.595802 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w54f" event={"ID":"9595946b-a571-4e7c-b84b-35a6e9f05baa","Type":"ContainerDied","Data":"3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30"} Jan 22 11:04:00 crc kubenswrapper[4975]: E0122 11:04:00.736533 4975 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9595946b_a571_4e7c_b84b_35a6e9f05baa.slice/crio-3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9595946b_a571_4e7c_b84b_35a6e9f05baa.slice/crio-conmon-3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:04:01 crc kubenswrapper[4975]: I0122 11:04:01.610788 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w54f" event={"ID":"9595946b-a571-4e7c-b84b-35a6e9f05baa","Type":"ContainerStarted","Data":"40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b"} Jan 22 11:04:08 crc kubenswrapper[4975]: I0122 11:04:08.585803 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:04:08 crc kubenswrapper[4975]: I0122 11:04:08.586411 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:04:08 crc kubenswrapper[4975]: I0122 11:04:08.654982 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:04:08 crc kubenswrapper[4975]: I0122 11:04:08.675537 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4w54f" podStartSLOduration=9.260151855 podStartE2EDuration="10.675507826s" 
podCreationTimestamp="2026-01-22 11:03:58 +0000 UTC" firstStartedPulling="2026-01-22 11:03:59.585545648 +0000 UTC m=+1632.243581758" lastFinishedPulling="2026-01-22 11:04:01.000901619 +0000 UTC m=+1633.658937729" observedRunningTime="2026-01-22 11:04:01.63415325 +0000 UTC m=+1634.292189390" watchObservedRunningTime="2026-01-22 11:04:08.675507826 +0000 UTC m=+1641.333543996" Jan 22 11:04:08 crc kubenswrapper[4975]: I0122 11:04:08.779401 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:04:08 crc kubenswrapper[4975]: I0122 11:04:08.899993 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w54f"] Jan 22 11:04:09 crc kubenswrapper[4975]: I0122 11:04:09.754892 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:04:09 crc kubenswrapper[4975]: E0122 11:04:09.755234 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:04:10 crc kubenswrapper[4975]: I0122 11:04:10.748202 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4w54f" podUID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerName="registry-server" containerID="cri-o://40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b" gracePeriod=2 Jan 22 11:04:10 crc kubenswrapper[4975]: E0122 11:04:10.977469 4975 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9595946b_a571_4e7c_b84b_35a6e9f05baa.slice/crio-conmon-40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9595946b_a571_4e7c_b84b_35a6e9f05baa.slice/crio-40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.222399 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.372667 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-catalog-content\") pod \"9595946b-a571-4e7c-b84b-35a6e9f05baa\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.372724 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f6hp\" (UniqueName: \"kubernetes.io/projected/9595946b-a571-4e7c-b84b-35a6e9f05baa-kube-api-access-4f6hp\") pod \"9595946b-a571-4e7c-b84b-35a6e9f05baa\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.372809 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-utilities\") pod \"9595946b-a571-4e7c-b84b-35a6e9f05baa\" (UID: \"9595946b-a571-4e7c-b84b-35a6e9f05baa\") " Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.373981 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-utilities" (OuterVolumeSpecName: "utilities") pod "9595946b-a571-4e7c-b84b-35a6e9f05baa" (UID: "9595946b-a571-4e7c-b84b-35a6e9f05baa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.379834 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9595946b-a571-4e7c-b84b-35a6e9f05baa-kube-api-access-4f6hp" (OuterVolumeSpecName: "kube-api-access-4f6hp") pod "9595946b-a571-4e7c-b84b-35a6e9f05baa" (UID: "9595946b-a571-4e7c-b84b-35a6e9f05baa"). InnerVolumeSpecName "kube-api-access-4f6hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.397186 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9595946b-a571-4e7c-b84b-35a6e9f05baa" (UID: "9595946b-a571-4e7c-b84b-35a6e9f05baa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.475553 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.475604 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9595946b-a571-4e7c-b84b-35a6e9f05baa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.475625 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f6hp\" (UniqueName: \"kubernetes.io/projected/9595946b-a571-4e7c-b84b-35a6e9f05baa-kube-api-access-4f6hp\") on node \"crc\" DevicePath \"\"" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.781791 4975 generic.go:334] "Generic (PLEG): container finished" podID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerID="40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b" exitCode=0 Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.782134 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w54f" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.785390 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w54f" event={"ID":"9595946b-a571-4e7c-b84b-35a6e9f05baa","Type":"ContainerDied","Data":"40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b"} Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.785443 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w54f" event={"ID":"9595946b-a571-4e7c-b84b-35a6e9f05baa","Type":"ContainerDied","Data":"c82b7126a0501fa89d6d67e8ba4f97723297002352c987bd7bf6f93d15fd66de"} Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.785469 4975 scope.go:117] "RemoveContainer" containerID="40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.818628 4975 scope.go:117] "RemoveContainer" containerID="3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30" Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.849584 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w54f"] Jan 22 11:04:11 crc kubenswrapper[4975]: I0122 11:04:11.860970 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w54f"] Jan 22 11:04:12 crc kubenswrapper[4975]: I0122 11:04:12.049232 4975 scope.go:117] "RemoveContainer" containerID="9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d" Jan 22 11:04:12 crc kubenswrapper[4975]: I0122 11:04:12.068158 4975 scope.go:117] "RemoveContainer" containerID="40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b" Jan 22 11:04:12 crc kubenswrapper[4975]: E0122 11:04:12.068682 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b\": container with ID starting with 40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b not found: ID does not exist" containerID="40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b" Jan 22 11:04:12 crc kubenswrapper[4975]: I0122 11:04:12.068729 4975 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b"} err="failed to get container status \"40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b\": rpc error: code = NotFound desc = could not find container \"40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b\": container with ID starting with 40886eb2a6370cbb94b0b4f14af032f821d40d5f303b11ff4e1a6b8deb591c2b not found: ID does not exist" Jan 22 11:04:12 crc kubenswrapper[4975]: I0122 11:04:12.068759 4975 scope.go:117] "RemoveContainer" containerID="3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30" Jan 22 11:04:12 crc kubenswrapper[4975]: E0122 11:04:12.069238 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30\": container with ID starting with 3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30 not found: ID does not exist" containerID="3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30" Jan 22 11:04:12 crc kubenswrapper[4975]: I0122 11:04:12.069272 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30"} err="failed to get container status \"3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30\": rpc error: code = NotFound desc = could not find container \"3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30\": container with ID starting with 3341bd03857acef3f0f593335364db851b5cf1f82c5158261612081a144bdc30 not found: ID does not exist" Jan 22 11:04:12 crc kubenswrapper[4975]: I0122 11:04:12.069298 4975 scope.go:117] "RemoveContainer" containerID="9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d" Jan 22 11:04:12 crc kubenswrapper[4975]: E0122 11:04:12.069624 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d\": container with ID starting with 9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d not found: ID does not exist" containerID="9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d" Jan 22 11:04:12 crc kubenswrapper[4975]: I0122 11:04:12.069647 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d"} err="failed to get container status \"9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d\": rpc error: code = NotFound desc = could not find container \"9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d\": container with ID starting with 9263a33a3eff3b8271cc9ef390db8f15731ef5215a54acc7bb94317f285d538d not found: ID does not exist" Jan 22 11:04:13 crc kubenswrapper[4975]: I0122 11:04:13.774174 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9595946b-a571-4e7c-b84b-35a6e9f05baa" path="/var/lib/kubelet/pods/9595946b-a571-4e7c-b84b-35a6e9f05baa/volumes" Jan 22 11:04:21 crc kubenswrapper[4975]: I0122 11:04:21.755891 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:04:21 crc kubenswrapper[4975]: E0122 11:04:21.757016 4975 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:04:34 crc kubenswrapper[4975]: I0122 11:04:34.755369 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:04:34 crc kubenswrapper[4975]: E0122 11:04:34.756490 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:04:48 crc kubenswrapper[4975]: I0122 11:04:48.756617 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:04:48 crc kubenswrapper[4975]: E0122 11:04:48.757449 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:04:53 crc kubenswrapper[4975]: I0122 11:04:53.261863 4975 generic.go:334] "Generic (PLEG): container finished" podID="3e928e99-1e6c-4d3e-a7f3-bd725e29c415" containerID="257e4501d262ad81c56f82f4fa010953f2992a7d24a10dd41b208ef9782edc21" exitCode=0 Jan 22 11:04:53 crc kubenswrapper[4975]: I0122 11:04:53.261951 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c" event={"ID":"3e928e99-1e6c-4d3e-a7f3-bd725e29c415","Type":"ContainerDied","Data":"257e4501d262ad81c56f82f4fa010953f2992a7d24a10dd41b208ef9782edc21"} Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.706486 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c" Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.842434 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-inventory\") pod \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.842676 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-bootstrap-combined-ca-bundle\") pod \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.842773 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztrgx\" (UniqueName: \"kubernetes.io/projected/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-kube-api-access-ztrgx\") pod \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.843711 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-ssh-key-openstack-edpm-ipam\") pod \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\" (UID: \"3e928e99-1e6c-4d3e-a7f3-bd725e29c415\") " Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.847531 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3e928e99-1e6c-4d3e-a7f3-bd725e29c415" (UID: "3e928e99-1e6c-4d3e-a7f3-bd725e29c415"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.848784 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-kube-api-access-ztrgx" (OuterVolumeSpecName: "kube-api-access-ztrgx") pod "3e928e99-1e6c-4d3e-a7f3-bd725e29c415" (UID: "3e928e99-1e6c-4d3e-a7f3-bd725e29c415"). InnerVolumeSpecName "kube-api-access-ztrgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.870514 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e928e99-1e6c-4d3e-a7f3-bd725e29c415" (UID: "3e928e99-1e6c-4d3e-a7f3-bd725e29c415"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.875232 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-inventory" (OuterVolumeSpecName: "inventory") pod "3e928e99-1e6c-4d3e-a7f3-bd725e29c415" (UID: "3e928e99-1e6c-4d3e-a7f3-bd725e29c415"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.946470 4975 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.946522 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztrgx\" (UniqueName: \"kubernetes.io/projected/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-kube-api-access-ztrgx\") on node \"crc\" DevicePath \"\"" Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.946540 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:04:54 crc kubenswrapper[4975]: I0122 11:04:54.946559 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e928e99-1e6c-4d3e-a7f3-bd725e29c415-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.041371 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vqxns"] Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.052234 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vqxns"] Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.287471 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c" event={"ID":"3e928e99-1e6c-4d3e-a7f3-bd725e29c415","Type":"ContainerDied","Data":"f89667a273bda3ab1051e657fd4abba2854c8635bed7762b26ec8281cf2b63fc"} Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.287538 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f89667a273bda3ab1051e657fd4abba2854c8635bed7762b26ec8281cf2b63fc" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.287833 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.393046 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47"] Jan 22 11:04:55 crc kubenswrapper[4975]: E0122 11:04:55.393412 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerName="registry-server" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.393429 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerName="registry-server" Jan 22 11:04:55 crc kubenswrapper[4975]: E0122 11:04:55.393440 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e928e99-1e6c-4d3e-a7f3-bd725e29c415" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.393447 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e928e99-1e6c-4d3e-a7f3-bd725e29c415" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 11:04:55 crc kubenswrapper[4975]: E0122 11:04:55.393459 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerName="extract-utilities" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.393465 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerName="extract-utilities" Jan 22 11:04:55 crc kubenswrapper[4975]: E0122 11:04:55.393476 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerName="extract-content" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.393483 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerName="extract-content" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.393655 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e928e99-1e6c-4d3e-a7f3-bd725e29c415" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.393676 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9595946b-a571-4e7c-b84b-35a6e9f05baa" containerName="registry-server" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.394227 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.396906 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.396963 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.398058 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.400786 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.411648 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47"] Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.555480 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l7t47\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.555566 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg7hf\" (UniqueName: \"kubernetes.io/projected/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-kube-api-access-vg7hf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l7t47\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.555596 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l7t47\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.657471 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l7t47\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.658284 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg7hf\" (UniqueName: \"kubernetes.io/projected/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-kube-api-access-vg7hf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l7t47\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.658326 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l7t47\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.663980 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l7t47\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.666624 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l7t47\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.680047 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg7hf\" (UniqueName: \"kubernetes.io/projected/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-kube-api-access-vg7hf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l7t47\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.750503 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:04:55 crc kubenswrapper[4975]: I0122 11:04:55.766134 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc2626f-8d55-44c9-8478-cafd40077961" path="/var/lib/kubelet/pods/2dc2626f-8d55-44c9-8478-cafd40077961/volumes" Jan 22 11:04:56 crc kubenswrapper[4975]: I0122 11:04:56.303372 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47"] Jan 22 11:04:56 crc kubenswrapper[4975]: W0122 11:04:56.306994 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2579a01_95bb_4c7c_8bf5_c0c81586e31d.slice/crio-229255cde1c8d2d6165db825356fd278140c06481681da304b0ecdc9f6e66a77 WatchSource:0}: Error finding container 229255cde1c8d2d6165db825356fd278140c06481681da304b0ecdc9f6e66a77: Status 404 returned error can't find the container with id 229255cde1c8d2d6165db825356fd278140c06481681da304b0ecdc9f6e66a77 Jan 22 11:04:56 crc kubenswrapper[4975]: I0122 11:04:56.311181 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:04:57 crc kubenswrapper[4975]: I0122 11:04:57.027486 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ce3e-account-create-update-pqlkf"] Jan 22 11:04:57 crc kubenswrapper[4975]: I0122 11:04:57.035409 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ce3e-account-create-update-pqlkf"] Jan 22 11:04:57 crc kubenswrapper[4975]: I0122 11:04:57.316881 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" 
event={"ID":"a2579a01-95bb-4c7c-8bf5-c0c81586e31d","Type":"ContainerStarted","Data":"e147ab213311914d678df603a1552d0478180daf262da7601f84bd07d1815417"} Jan 22 11:04:57 crc kubenswrapper[4975]: I0122 11:04:57.316934 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" event={"ID":"a2579a01-95bb-4c7c-8bf5-c0c81586e31d","Type":"ContainerStarted","Data":"229255cde1c8d2d6165db825356fd278140c06481681da304b0ecdc9f6e66a77"} Jan 22 11:04:57 crc kubenswrapper[4975]: I0122 11:04:57.358622 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" podStartSLOduration=1.919705922 podStartE2EDuration="2.358596363s" podCreationTimestamp="2026-01-22 11:04:55 +0000 UTC" firstStartedPulling="2026-01-22 11:04:56.310893951 +0000 UTC m=+1688.968930071" lastFinishedPulling="2026-01-22 11:04:56.749784402 +0000 UTC m=+1689.407820512" observedRunningTime="2026-01-22 11:04:57.345568895 +0000 UTC m=+1690.003605015" watchObservedRunningTime="2026-01-22 11:04:57.358596363 +0000 UTC m=+1690.016632493" Jan 22 11:04:57 crc kubenswrapper[4975]: I0122 11:04:57.778631 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f9507c-123d-4f8d-ae32-c8ffefc98594" path="/var/lib/kubelet/pods/16f9507c-123d-4f8d-ae32-c8ffefc98594/volumes" Jan 22 11:04:59 crc kubenswrapper[4975]: I0122 11:04:59.047789 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1e00-account-create-update-456qq"] Jan 22 11:04:59 crc kubenswrapper[4975]: I0122 11:04:59.065893 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1e00-account-create-update-456qq"] Jan 22 11:04:59 crc kubenswrapper[4975]: I0122 11:04:59.756098 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:04:59 crc kubenswrapper[4975]: E0122 11:04:59.756477 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:04:59 crc kubenswrapper[4975]: I0122 11:04:59.770774 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49d3932-3b57-4488-ae1b-3b2f95e6385f" path="/var/lib/kubelet/pods/d49d3932-3b57-4488-ae1b-3b2f95e6385f/volumes" Jan 22 11:05:00 crc kubenswrapper[4975]: I0122 11:05:00.030595 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-eff0-account-create-update-q2t5p"] Jan 22 11:05:00 crc kubenswrapper[4975]: I0122 11:05:00.041933 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-psnps"] Jan 22 11:05:00 crc kubenswrapper[4975]: I0122 11:05:00.052559 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-psnps"] Jan 22 11:05:00 crc kubenswrapper[4975]: I0122 11:05:00.064849 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-eff0-account-create-update-q2t5p"] Jan 22 11:05:00 crc kubenswrapper[4975]: I0122 11:05:00.072837 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-b94rh"] Jan 22 11:05:00 crc kubenswrapper[4975]: 
I0122 11:05:00.080785 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-b94rh"] Jan 22 11:05:01 crc kubenswrapper[4975]: I0122 11:05:01.767871 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="155e5871-28a0-48dc-9bbf-1445a44ad371" path="/var/lib/kubelet/pods/155e5871-28a0-48dc-9bbf-1445a44ad371/volumes" Jan 22 11:05:01 crc kubenswrapper[4975]: I0122 11:05:01.769284 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35572d60-24e6-424c-90a5-3c7566536764" path="/var/lib/kubelet/pods/35572d60-24e6-424c-90a5-3c7566536764/volumes" Jan 22 11:05:01 crc kubenswrapper[4975]: I0122 11:05:01.770415 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf" path="/var/lib/kubelet/pods/ce9a43fe-c0ea-4f62-bb5d-4a3c5c77ebaf/volumes" Jan 22 11:05:12 crc kubenswrapper[4975]: I0122 11:05:12.754770 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:05:12 crc kubenswrapper[4975]: E0122 11:05:12.755547 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:05:19 crc kubenswrapper[4975]: I0122 11:05:19.031852 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jdx5j"] Jan 22 11:05:19 crc kubenswrapper[4975]: I0122 11:05:19.042008 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jdx5j"] Jan 22 11:05:19 crc kubenswrapper[4975]: I0122 11:05:19.765194 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7feaf89-c705-432e-bbd4-a5ee64d7c971" path="/var/lib/kubelet/pods/f7feaf89-c705-432e-bbd4-a5ee64d7c971/volumes" Jan 22 11:05:26 crc kubenswrapper[4975]: I0122 11:05:26.059692 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4hdn6"] Jan 22 11:05:26 crc kubenswrapper[4975]: I0122 11:05:26.072502 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4hdn6"] Jan 22 11:05:27 crc kubenswrapper[4975]: I0122 11:05:27.770843 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:05:27 crc kubenswrapper[4975]: E0122 11:05:27.772237 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:05:27 crc kubenswrapper[4975]: I0122 11:05:27.773157 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0f9a87e-8307-44d7-a333-8eb76169882b" path="/var/lib/kubelet/pods/c0f9a87e-8307-44d7-a333-8eb76169882b/volumes" Jan 22 11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.323122 4975 scope.go:117] "RemoveContainer" containerID="bf52444ea72e2c6b386dfbea5730a7b5ce54b1c733c95a62e05867fa81baa174" Jan 22 
11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.365767 4975 scope.go:117] "RemoveContainer" containerID="c9c7dd51b729be2c9be43e6649a6b76a8b411cfc2c3af80cdc9174f59372196e" Jan 22 11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.455076 4975 scope.go:117] "RemoveContainer" containerID="dfd40ac8494db26513883464d3fb20725e3df030b8c43ce8b054e4a98b0322fb" Jan 22 11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.504200 4975 scope.go:117] "RemoveContainer" containerID="21d56dcce374be4360e8cdd9bb1732a2cf1d049931c93d5488e783434013e169" Jan 22 11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.553735 4975 scope.go:117] "RemoveContainer" containerID="d6c3349ef786365d547d45984b689bb9eabedf3956d593a39c504f4d57e7cccc" Jan 22 11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.602579 4975 scope.go:117] "RemoveContainer" containerID="a6938f1c38f78389ce0b30b55067b88b256f1eaceec9b54bf40ad3e84a2175f0" Jan 22 11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.643816 4975 scope.go:117] "RemoveContainer" containerID="2deafc08e859d44a1a5d8a7587716e982c5ae6eb951bbeea2a680d96611cf705" Jan 22 11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.671510 4975 scope.go:117] "RemoveContainer" containerID="ff6e45dfd32a446678f9bac33e1c9b7153a7ade6fe4fd2c771fc91296344d7b5" Jan 22 11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.690385 4975 scope.go:117] "RemoveContainer" containerID="69e4f37b817c3814aa8e46c00065b5e6aa0f35dd8b3db880c4cfe0b2b3586440" Jan 22 11:05:31 crc kubenswrapper[4975]: I0122 11:05:31.715494 4975 scope.go:117] "RemoveContainer" containerID="bcd23be3a1cb1871320f315681136f76c91ab653587376c2d442474f19bb5f7d" Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.062157 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9f8d-account-create-update-458g6"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.090480 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e9a6-account-create-update-6bcvq"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.098207 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9f8d-account-create-update-458g6"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.104882 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-71db-account-create-update-g87h8"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.111294 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-gwzsm"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.117581 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9wqpd"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.128086 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-48drg"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.138032 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e9a6-account-create-update-6bcvq"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.158932 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-71db-account-create-update-g87h8"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.171674 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9wqpd"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.186359 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-48drg"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.197243 4975 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/cinder-db-create-gwzsm"] Jan 22 11:05:42 crc kubenswrapper[4975]: I0122 11:05:42.755955 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:05:42 crc kubenswrapper[4975]: E0122 11:05:42.756175 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:05:43 crc kubenswrapper[4975]: I0122 11:05:43.771900 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="243d5a33-350a-4e38-8426-44a995a7f7dd" path="/var/lib/kubelet/pods/243d5a33-350a-4e38-8426-44a995a7f7dd/volumes" Jan 22 11:05:43 crc kubenswrapper[4975]: I0122 11:05:43.774140 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56199b69-ad45-4d1a-8c20-ff267c81eab0" path="/var/lib/kubelet/pods/56199b69-ad45-4d1a-8c20-ff267c81eab0/volumes" Jan 22 11:05:43 crc kubenswrapper[4975]: I0122 11:05:43.776002 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694b6014-c433-4b63-b320-93da981b6674" path="/var/lib/kubelet/pods/694b6014-c433-4b63-b320-93da981b6674/volumes" Jan 22 11:05:43 crc kubenswrapper[4975]: I0122 11:05:43.777430 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78fe7f97-1158-437d-b365-2d2ae6ecd6b1" path="/var/lib/kubelet/pods/78fe7f97-1158-437d-b365-2d2ae6ecd6b1/volumes" Jan 22 11:05:43 crc kubenswrapper[4975]: I0122 11:05:43.779708 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc532fde-1153-48d1-b6f6-a7c4672549b2" path="/var/lib/kubelet/pods/cc532fde-1153-48d1-b6f6-a7c4672549b2/volumes" Jan 22 11:05:43 crc kubenswrapper[4975]: I0122 11:05:43.781154 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff" path="/var/lib/kubelet/pods/e6ccf962-2d8a-4dc5-b9f7-0e02dd1381ff/volumes" Jan 22 11:05:47 crc kubenswrapper[4975]: I0122 11:05:47.053796 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-bmksp"] Jan 22 11:05:47 crc kubenswrapper[4975]: I0122 11:05:47.068643 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-bmksp"] Jan 22 11:05:47 crc kubenswrapper[4975]: I0122 11:05:47.775076 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d3e762-9e06-4bac-9854-780c955df21e" path="/var/lib/kubelet/pods/70d3e762-9e06-4bac-9854-780c955df21e/volumes" Jan 22 11:05:56 crc kubenswrapper[4975]: I0122 11:05:56.755573 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:05:56 crc kubenswrapper[4975]: E0122 11:05:56.756613 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:06:09 crc kubenswrapper[4975]: I0122 11:06:09.755144 4975 scope.go:117] 
"RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:06:09 crc kubenswrapper[4975]: E0122 11:06:09.756428 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:06:23 crc kubenswrapper[4975]: I0122 11:06:23.755089 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:06:23 crc kubenswrapper[4975]: E0122 11:06:23.756139 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:06:31 crc kubenswrapper[4975]: I0122 11:06:31.870621 4975 scope.go:117] "RemoveContainer" containerID="71178dd7435780958d0a9b0a4da0a2a9975a4546dccc68f912a46198e7a950bb" Jan 22 11:06:31 crc kubenswrapper[4975]: I0122 11:06:31.903914 4975 scope.go:117] "RemoveContainer" containerID="6a46c7a143f953312f9619c592f4398fa18d7f1063548b8997e5d46b0e89ad6d" Jan 22 11:06:31 crc kubenswrapper[4975]: I0122 11:06:31.938408 4975 scope.go:117] "RemoveContainer" containerID="5cf8449da87971b26339e7ac5c7b5e19a6d1b97c5abb82e29be306bd194d7f2e" Jan 22 11:06:31 crc kubenswrapper[4975]: I0122 11:06:31.979610 4975 scope.go:117] "RemoveContainer" containerID="302f4fb16f3e5775c821ccb4e72c026b55f052b127345865eb92b19747da9366" Jan 22 11:06:32 crc kubenswrapper[4975]: I0122 11:06:32.022385 4975 scope.go:117] "RemoveContainer" containerID="ad1ee4e20a9d65eb86d33293edec49115a098261494983322f2b44ab17cf131b" Jan 22 11:06:32 crc kubenswrapper[4975]: I0122 11:06:32.100816 4975 scope.go:117] "RemoveContainer" containerID="1a7ad298b3c0975e0f4b13cb3bf93f04f6f28871ac05341459dea0561dd69c3a" Jan 22 11:06:32 crc kubenswrapper[4975]: I0122 11:06:32.147660 4975 scope.go:117] "RemoveContainer" containerID="b060affecd1b1037a6ec65f2c6bf1e98d7f1a8c8ce711986ba10c5735b28745e" Jan 22 11:06:34 crc kubenswrapper[4975]: I0122 11:06:34.067875 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mkhfs"] Jan 22 11:06:34 crc kubenswrapper[4975]: I0122 11:06:34.076501 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hr72x"] Jan 22 11:06:34 crc kubenswrapper[4975]: I0122 11:06:34.084992 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mkhfs"] Jan 22 11:06:34 crc kubenswrapper[4975]: I0122 11:06:34.097032 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n7gh8"] Jan 22 11:06:34 crc kubenswrapper[4975]: I0122 11:06:34.104841 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hr72x"] Jan 22 11:06:34 crc kubenswrapper[4975]: I0122 11:06:34.111840 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n7gh8"] Jan 22 11:06:35 crc kubenswrapper[4975]: I0122 11:06:35.755151 4975 scope.go:117] 
"RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:06:35 crc kubenswrapper[4975]: E0122 11:06:35.755972 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:06:35 crc kubenswrapper[4975]: I0122 11:06:35.770638 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18942582-238b-4674-831a-d1a2284c912c" path="/var/lib/kubelet/pods/18942582-238b-4674-831a-d1a2284c912c/volumes" Jan 22 11:06:35 crc kubenswrapper[4975]: I0122 11:06:35.771674 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="590caacf-c377-4aed-abad-b927b28f51f5" path="/var/lib/kubelet/pods/590caacf-c377-4aed-abad-b927b28f51f5/volumes" Jan 22 11:06:35 crc kubenswrapper[4975]: I0122 11:06:35.772516 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7594ce51-7b3b-42e1-a47c-791edf907087" path="/var/lib/kubelet/pods/7594ce51-7b3b-42e1-a47c-791edf907087/volumes" Jan 22 11:06:46 crc kubenswrapper[4975]: I0122 11:06:46.027000 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-97z49"] Jan 22 11:06:46 crc kubenswrapper[4975]: I0122 11:06:46.043825 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-97z49"] Jan 22 11:06:47 crc kubenswrapper[4975]: I0122 11:06:47.785318 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abbc90ac-2417-470d-84cf-ddc823f23e50" path="/var/lib/kubelet/pods/abbc90ac-2417-470d-84cf-ddc823f23e50/volumes" Jan 22 11:06:50 crc kubenswrapper[4975]: I0122 11:06:50.755315 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:06:50 crc kubenswrapper[4975]: E0122 11:06:50.756185 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:06:51 crc kubenswrapper[4975]: I0122 11:06:51.804960 4975 generic.go:334] "Generic (PLEG): container finished" podID="a2579a01-95bb-4c7c-8bf5-c0c81586e31d" containerID="e147ab213311914d678df603a1552d0478180daf262da7601f84bd07d1815417" exitCode=0 Jan 22 11:06:51 crc kubenswrapper[4975]: I0122 11:06:51.805008 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" event={"ID":"a2579a01-95bb-4c7c-8bf5-c0c81586e31d","Type":"ContainerDied","Data":"e147ab213311914d678df603a1552d0478180daf262da7601f84bd07d1815417"} Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.358790 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.523787 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-ssh-key-openstack-edpm-ipam\") pod \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.523856 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg7hf\" (UniqueName: \"kubernetes.io/projected/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-kube-api-access-vg7hf\") pod \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.524051 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-inventory\") pod \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\" (UID: \"a2579a01-95bb-4c7c-8bf5-c0c81586e31d\") " Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.531065 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-kube-api-access-vg7hf" (OuterVolumeSpecName: "kube-api-access-vg7hf") pod "a2579a01-95bb-4c7c-8bf5-c0c81586e31d" (UID: "a2579a01-95bb-4c7c-8bf5-c0c81586e31d"). InnerVolumeSpecName "kube-api-access-vg7hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.551723 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-inventory" (OuterVolumeSpecName: "inventory") pod "a2579a01-95bb-4c7c-8bf5-c0c81586e31d" (UID: "a2579a01-95bb-4c7c-8bf5-c0c81586e31d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.554738 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a2579a01-95bb-4c7c-8bf5-c0c81586e31d" (UID: "a2579a01-95bb-4c7c-8bf5-c0c81586e31d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.625658 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.625688 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.625698 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg7hf\" (UniqueName: \"kubernetes.io/projected/a2579a01-95bb-4c7c-8bf5-c0c81586e31d-kube-api-access-vg7hf\") on node \"crc\" DevicePath \"\"" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.831920 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" event={"ID":"a2579a01-95bb-4c7c-8bf5-c0c81586e31d","Type":"ContainerDied","Data":"229255cde1c8d2d6165db825356fd278140c06481681da304b0ecdc9f6e66a77"} Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.831977 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="229255cde1c8d2d6165db825356fd278140c06481681da304b0ecdc9f6e66a77" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.832011 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l7t47" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.955913 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx"] Jan 22 11:06:53 crc kubenswrapper[4975]: E0122 11:06:53.956409 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2579a01-95bb-4c7c-8bf5-c0c81586e31d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.956428 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2579a01-95bb-4c7c-8bf5-c0c81586e31d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.956691 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2579a01-95bb-4c7c-8bf5-c0c81586e31d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.957506 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.964214 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx"] Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.982695 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.982761 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.982696 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:06:53 crc kubenswrapper[4975]: I0122 11:06:53.983074 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.134848 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-d5spx\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.134944 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mp5x\" (UniqueName: \"kubernetes.io/projected/a3494e98-461d-491b-96c0-05c360412e74-kube-api-access-9mp5x\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-d5spx\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.135085 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-d5spx\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.236998 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-d5spx\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.237092 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mp5x\" (UniqueName: \"kubernetes.io/projected/a3494e98-461d-491b-96c0-05c360412e74-kube-api-access-9mp5x\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-d5spx\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.237183 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-d5spx\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.245108 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-d5spx\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.247109 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-d5spx\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.264301 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mp5x\" (UniqueName: \"kubernetes.io/projected/a3494e98-461d-491b-96c0-05c360412e74-kube-api-access-9mp5x\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-d5spx\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.306930 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" Jan 22 11:06:54 crc kubenswrapper[4975]: I0122 11:06:54.905192 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx"] Jan 22 11:06:55 crc kubenswrapper[4975]: I0122 11:06:55.858777 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" event={"ID":"a3494e98-461d-491b-96c0-05c360412e74","Type":"ContainerStarted","Data":"621c52bfe71e63cf51e2376611234cd08020b24caff186a33d470f1ad6404637"} Jan 22 11:06:55 crc kubenswrapper[4975]: I0122 11:06:55.859356 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" event={"ID":"a3494e98-461d-491b-96c0-05c360412e74","Type":"ContainerStarted","Data":"a773428c2279542f09732452ab9399e030f36c236c825e08e7b480b8e2613ee3"} Jan 22 11:06:58 crc kubenswrapper[4975]: I0122 11:06:58.037570 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" podStartSLOduration=4.567423241 podStartE2EDuration="5.037553004s" podCreationTimestamp="2026-01-22 11:06:53 +0000 UTC" firstStartedPulling="2026-01-22 11:06:54.903816088 +0000 UTC m=+1807.561852198" lastFinishedPulling="2026-01-22 11:06:55.373945811 +0000 UTC m=+1808.031981961" observedRunningTime="2026-01-22 11:06:55.873365015 +0000 UTC m=+1808.531401125" watchObservedRunningTime="2026-01-22 11:06:58.037553004 +0000 UTC m=+1810.695589114" Jan 22 11:06:58 crc kubenswrapper[4975]: I0122 11:06:58.040747 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-x24jb"] 
Jan 22 11:06:58 crc kubenswrapper[4975]: I0122 11:06:58.048704 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-x24jb"]
Jan 22 11:06:59 crc kubenswrapper[4975]: I0122 11:06:59.770058 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58e68a6-dee2-49a4-be2d-e6b1ca69b19f" path="/var/lib/kubelet/pods/f58e68a6-dee2-49a4-be2d-e6b1ca69b19f/volumes"
Jan 22 11:07:04 crc kubenswrapper[4975]: I0122 11:07:04.754829 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"
Jan 22 11:07:04 crc kubenswrapper[4975]: E0122 11:07:04.755703 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:07:17 crc kubenswrapper[4975]: I0122 11:07:17.764076 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"
Jan 22 11:07:17 crc kubenswrapper[4975]: E0122 11:07:17.765028 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:07:29 crc kubenswrapper[4975]: I0122 11:07:29.755216 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"
Jan 22 11:07:29 crc kubenswrapper[4975]: E0122 11:07:29.756473 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:07:32 crc kubenswrapper[4975]: I0122 11:07:32.288612 4975 scope.go:117] "RemoveContainer" containerID="7998680a9cd7cf7a32681593119d7938b68da8ed5d638ba92eb6f9105c0b83cc"
Jan 22 11:07:32 crc kubenswrapper[4975]: I0122 11:07:32.332710 4975 scope.go:117] "RemoveContainer" containerID="9b57a69c63f587768367e76faa3583720078a51a532debd6dacd7cc730572cb7"
Jan 22 11:07:32 crc kubenswrapper[4975]: I0122 11:07:32.384321 4975 scope.go:117] "RemoveContainer" containerID="68c44397d85078ceb1f615b35b0f4489644aee354873ff41d24b5c9c40d7d2a8"
Jan 22 11:07:32 crc kubenswrapper[4975]: I0122 11:07:32.420771 4975 scope.go:117] "RemoveContainer" containerID="9069ee30524a475c7fcf94d4e779314e50eda7feb4961d18af59b94b2d3b53d5"
Jan 22 11:07:32 crc kubenswrapper[4975]: I0122 11:07:32.493932 4975 scope.go:117] "RemoveContainer" containerID="fa3f54390ce11f410f44674f2f87ab26787fac6ef23dfb8f27ec565a28a1eacf"
Jan 22 11:07:40 crc kubenswrapper[4975]: I0122 11:07:40.755753 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"
Jan 22 11:07:40 crc kubenswrapper[4975]: E0122 11:07:40.756722 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.057711 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5e0b-account-create-update-gtwjv"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.066984 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3e3d-account-create-update-gnkwp"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.079405 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-z65jj"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.090187 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5e0b-account-create-update-gtwjv"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.097934 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-25c5-account-create-update-96jwm"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.106154 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-ct85p"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.112721 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-n44m9"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.119833 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-z65jj"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.127244 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3e3d-account-create-update-gnkwp"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.134518 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-25c5-account-create-update-96jwm"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.141016 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-n44m9"]
Jan 22 11:07:48 crc kubenswrapper[4975]: I0122 11:07:48.146803 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-ct85p"]
Jan 22 11:07:49 crc kubenswrapper[4975]: I0122 11:07:49.766830 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eefbf47-b3c8-4491-b832-dd650404b5e4" path="/var/lib/kubelet/pods/0eefbf47-b3c8-4491-b832-dd650404b5e4/volumes"
Jan 22 11:07:49 crc kubenswrapper[4975]: I0122 11:07:49.767465 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10ade78c-f910-4f71-98ef-6751e66bcd1e" path="/var/lib/kubelet/pods/10ade78c-f910-4f71-98ef-6751e66bcd1e/volumes"
Jan 22 11:07:49 crc kubenswrapper[4975]: I0122 11:07:49.768077 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d9a9f89-8c5f-4284-82d5-083fb294f744" path="/var/lib/kubelet/pods/5d9a9f89-8c5f-4284-82d5-083fb294f744/volumes"
Jan 22 11:07:49 crc kubenswrapper[4975]: I0122 11:07:49.768614 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a6f5135-980c-42f0-91fb-fcc46f662331" path="/var/lib/kubelet/pods/8a6f5135-980c-42f0-91fb-fcc46f662331/volumes"
Jan 22 11:07:49 crc kubenswrapper[4975]: I0122 11:07:49.769724 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8fefa1-f558-4f60-80a2-053651036280" path="/var/lib/kubelet/pods/8e8fefa1-f558-4f60-80a2-053651036280/volumes"
Jan 22 11:07:49 crc kubenswrapper[4975]: I0122 11:07:49.770301 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c67122ac-c96e-4dfb-9b46-59e75af7ecad" path="/var/lib/kubelet/pods/c67122ac-c96e-4dfb-9b46-59e75af7ecad/volumes"
Jan 22 11:07:51 crc kubenswrapper[4975]: I0122 11:07:51.755209 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"
Jan 22 11:07:51 crc kubenswrapper[4975]: E0122 11:07:51.755808 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:08:05 crc kubenswrapper[4975]: I0122 11:08:05.755194 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"
Jan 22 11:08:05 crc kubenswrapper[4975]: E0122 11:08:05.756261 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:08:16 crc kubenswrapper[4975]: I0122 11:08:16.703324 4975 generic.go:334] "Generic (PLEG): container finished" podID="a3494e98-461d-491b-96c0-05c360412e74" containerID="621c52bfe71e63cf51e2376611234cd08020b24caff186a33d470f1ad6404637" exitCode=0
Jan 22 11:08:16 crc kubenswrapper[4975]: I0122 11:08:16.703368 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" event={"ID":"a3494e98-461d-491b-96c0-05c360412e74","Type":"ContainerDied","Data":"621c52bfe71e63cf51e2376611234cd08020b24caff186a33d470f1ad6404637"}
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.225612 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.302403 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-ssh-key-openstack-edpm-ipam\") pod \"a3494e98-461d-491b-96c0-05c360412e74\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") "
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.302541 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mp5x\" (UniqueName: \"kubernetes.io/projected/a3494e98-461d-491b-96c0-05c360412e74-kube-api-access-9mp5x\") pod \"a3494e98-461d-491b-96c0-05c360412e74\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") "
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.302594 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-inventory\") pod \"a3494e98-461d-491b-96c0-05c360412e74\" (UID: \"a3494e98-461d-491b-96c0-05c360412e74\") "
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.309557 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3494e98-461d-491b-96c0-05c360412e74-kube-api-access-9mp5x" (OuterVolumeSpecName: "kube-api-access-9mp5x") pod "a3494e98-461d-491b-96c0-05c360412e74" (UID: "a3494e98-461d-491b-96c0-05c360412e74"). InnerVolumeSpecName "kube-api-access-9mp5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.329523 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a3494e98-461d-491b-96c0-05c360412e74" (UID: "a3494e98-461d-491b-96c0-05c360412e74"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.340757 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-inventory" (OuterVolumeSpecName: "inventory") pod "a3494e98-461d-491b-96c0-05c360412e74" (UID: "a3494e98-461d-491b-96c0-05c360412e74"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.409272 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mp5x\" (UniqueName: \"kubernetes.io/projected/a3494e98-461d-491b-96c0-05c360412e74-kube-api-access-9mp5x\") on node \"crc\" DevicePath \"\""
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.409313 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.409324 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3494e98-461d-491b-96c0-05c360412e74-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.738233 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx" event={"ID":"a3494e98-461d-491b-96c0-05c360412e74","Type":"ContainerDied","Data":"a773428c2279542f09732452ab9399e030f36c236c825e08e7b480b8e2613ee3"}
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.738305 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a773428c2279542f09732452ab9399e030f36c236c825e08e7b480b8e2613ee3"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.738375 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-d5spx"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.845779 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"]
Jan 22 11:08:18 crc kubenswrapper[4975]: E0122 11:08:18.846699 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3494e98-461d-491b-96c0-05c360412e74" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.846730 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3494e98-461d-491b-96c0-05c360412e74" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.847091 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3494e98-461d-491b-96c0-05c360412e74" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.848251 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.851565 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.851634 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.851941 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.853271 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.857469 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"]
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.928064 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtwc\" (UniqueName: \"kubernetes.io/projected/1f779aa8-c70d-46ba-a278-b4062a5ddd01-kube-api-access-9wtwc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bk27r\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.928178 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bk27r\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:18 crc kubenswrapper[4975]: I0122 11:08:18.928293 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bk27r\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:19 crc kubenswrapper[4975]: I0122 11:08:19.030303 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bk27r\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:19 crc kubenswrapper[4975]: I0122 11:08:19.030896 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wtwc\" (UniqueName: \"kubernetes.io/projected/1f779aa8-c70d-46ba-a278-b4062a5ddd01-kube-api-access-9wtwc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bk27r\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:19 crc kubenswrapper[4975]: I0122 11:08:19.031141 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bk27r\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:19 crc kubenswrapper[4975]: I0122 11:08:19.036508 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bk27r\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:19 crc kubenswrapper[4975]: I0122 11:08:19.043843 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bk27r\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:19 crc kubenswrapper[4975]: I0122 11:08:19.050231 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wtwc\" (UniqueName: \"kubernetes.io/projected/1f779aa8-c70d-46ba-a278-b4062a5ddd01-kube-api-access-9wtwc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bk27r\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:19 crc kubenswrapper[4975]: I0122 11:08:19.177938 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:19 crc kubenswrapper[4975]: I0122 11:08:19.842957 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"]
Jan 22 11:08:20 crc kubenswrapper[4975]: I0122 11:08:20.042708 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nqlnc"]
Jan 22 11:08:20 crc kubenswrapper[4975]: I0122 11:08:20.049749 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nqlnc"]
Jan 22 11:08:20 crc kubenswrapper[4975]: I0122 11:08:20.754843 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r" event={"ID":"1f779aa8-c70d-46ba-a278-b4062a5ddd01","Type":"ContainerStarted","Data":"170da8675d7dbfd0bdbad658c061d11ee9b4b3d89ad2d67d1b895bd41930206b"}
Jan 22 11:08:20 crc kubenswrapper[4975]: I0122 11:08:20.755142 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r" event={"ID":"1f779aa8-c70d-46ba-a278-b4062a5ddd01","Type":"ContainerStarted","Data":"328df6679e6a165e4fa96afb8db0ee5265f8c48b440ca21d98d395ac4a494e35"}
Jan 22 11:08:20 crc kubenswrapper[4975]: I0122 11:08:20.754876 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398"
Jan 22 11:08:20 crc kubenswrapper[4975]: E0122 11:08:20.755386 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:08:20 crc kubenswrapper[4975]: I0122 11:08:20.768902 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r" podStartSLOduration=2.338367077 podStartE2EDuration="2.768885958s" podCreationTimestamp="2026-01-22 11:08:18 +0000 UTC" firstStartedPulling="2026-01-22 11:08:19.844842865 +0000 UTC m=+1892.502878975" lastFinishedPulling="2026-01-22 11:08:20.275361746 +0000 UTC m=+1892.933397856" observedRunningTime="2026-01-22 11:08:20.767741676 +0000 UTC m=+1893.425777786" watchObservedRunningTime="2026-01-22 11:08:20.768885958 +0000 UTC m=+1893.426922068"
Jan 22 11:08:21 crc kubenswrapper[4975]: I0122 11:08:21.799967 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422220a0-7220-4f6a-8d8b-46fa92e6082a" path="/var/lib/kubelet/pods/422220a0-7220-4f6a-8d8b-46fa92e6082a/volumes"
Jan 22 11:08:25 crc kubenswrapper[4975]: I0122 11:08:25.824684 4975 generic.go:334] "Generic (PLEG): container finished" podID="1f779aa8-c70d-46ba-a278-b4062a5ddd01" containerID="170da8675d7dbfd0bdbad658c061d11ee9b4b3d89ad2d67d1b895bd41930206b" exitCode=0
Jan 22 11:08:25 crc kubenswrapper[4975]: I0122 11:08:25.824816 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r" event={"ID":"1f779aa8-c70d-46ba-a278-b4062a5ddd01","Type":"ContainerDied","Data":"170da8675d7dbfd0bdbad658c061d11ee9b4b3d89ad2d67d1b895bd41930206b"}
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.306426 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.427509 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-inventory\") pod \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") "
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.427620 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-ssh-key-openstack-edpm-ipam\") pod \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") "
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.427733 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wtwc\" (UniqueName: \"kubernetes.io/projected/1f779aa8-c70d-46ba-a278-b4062a5ddd01-kube-api-access-9wtwc\") pod \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\" (UID: \"1f779aa8-c70d-46ba-a278-b4062a5ddd01\") "
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.436851 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f779aa8-c70d-46ba-a278-b4062a5ddd01-kube-api-access-9wtwc" (OuterVolumeSpecName: "kube-api-access-9wtwc") pod "1f779aa8-c70d-46ba-a278-b4062a5ddd01" (UID: "1f779aa8-c70d-46ba-a278-b4062a5ddd01"). InnerVolumeSpecName "kube-api-access-9wtwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.456715 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1f779aa8-c70d-46ba-a278-b4062a5ddd01" (UID: "1f779aa8-c70d-46ba-a278-b4062a5ddd01"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.464730 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-inventory" (OuterVolumeSpecName: "inventory") pod "1f779aa8-c70d-46ba-a278-b4062a5ddd01" (UID: "1f779aa8-c70d-46ba-a278-b4062a5ddd01"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.530561 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wtwc\" (UniqueName: \"kubernetes.io/projected/1f779aa8-c70d-46ba-a278-b4062a5ddd01-kube-api-access-9wtwc\") on node \"crc\" DevicePath \"\""
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.530591 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.530911 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f779aa8-c70d-46ba-a278-b4062a5ddd01-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.848150 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r" event={"ID":"1f779aa8-c70d-46ba-a278-b4062a5ddd01","Type":"ContainerDied","Data":"328df6679e6a165e4fa96afb8db0ee5265f8c48b440ca21d98d395ac4a494e35"}
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.848228 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="328df6679e6a165e4fa96afb8db0ee5265f8c48b440ca21d98d395ac4a494e35"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.848257 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bk27r"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.940170 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"]
Jan 22 11:08:27 crc kubenswrapper[4975]: E0122 11:08:27.940684 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f779aa8-c70d-46ba-a278-b4062a5ddd01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.940714 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f779aa8-c70d-46ba-a278-b4062a5ddd01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.941020 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f779aa8-c70d-46ba-a278-b4062a5ddd01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.941845 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.944531 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.944812 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.945052 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.945262 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 11:08:27 crc kubenswrapper[4975]: I0122 11:08:27.960040 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"]
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.041427 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tzwgz\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.041572 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf9sm\" (UniqueName: \"kubernetes.io/projected/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-kube-api-access-vf9sm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tzwgz\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.041615 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tzwgz\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.143969 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tzwgz\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.144227 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf9sm\" (UniqueName: \"kubernetes.io/projected/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-kube-api-access-vf9sm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tzwgz\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.144327 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tzwgz\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.156230 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tzwgz\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.156907 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tzwgz\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.176524 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf9sm\" (UniqueName: \"kubernetes.io/projected/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-kube-api-access-vf9sm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tzwgz\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"
Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.264871 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz" Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.836242 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz"] Jan 22 11:08:28 crc kubenswrapper[4975]: I0122 11:08:28.855214 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz" event={"ID":"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4","Type":"ContainerStarted","Data":"04e843ca370ca2cfd0c178b1c28d3984ced0b779c9d300b28e94c74d4fac7c45"} Jan 22 11:08:29 crc kubenswrapper[4975]: I0122 11:08:29.866550 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz" event={"ID":"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4","Type":"ContainerStarted","Data":"746928712feb4103c646945c6c51599e0a36e47ef578702a14fb8221f2725bdf"} Jan 22 11:08:29 crc kubenswrapper[4975]: I0122 11:08:29.893704 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz" podStartSLOduration=2.510222943 podStartE2EDuration="2.89365632s" podCreationTimestamp="2026-01-22 11:08:27 +0000 UTC" firstStartedPulling="2026-01-22 11:08:28.839350859 +0000 UTC m=+1901.497386979" lastFinishedPulling="2026-01-22 11:08:29.222784226 +0000 UTC m=+1901.880820356" observedRunningTime="2026-01-22 11:08:29.886114065 +0000 UTC m=+1902.544150225" watchObservedRunningTime="2026-01-22 11:08:29.89365632 +0000 UTC m=+1902.551692430" Jan 22 11:08:32 crc kubenswrapper[4975]: I0122 11:08:32.657001 4975 scope.go:117] "RemoveContainer" containerID="70e48d0de8b77be468d6210a2d02f13c3bece4f0f0b97fd2f3e79a945e29ee27" Jan 22 11:08:32 crc kubenswrapper[4975]: I0122 11:08:32.678470 4975 scope.go:117] "RemoveContainer" containerID="d602bc37ffa546cf9882c92428843419799b1c297b338c1a8ad7ecc2632418f8" Jan 22 11:08:32 crc kubenswrapper[4975]: I0122 11:08:32.730289 4975 scope.go:117] "RemoveContainer" containerID="4b6b92a96a5f6936ddac14f154cfc5669dd985c8533925d7f3c1a2f283c0e190" Jan 22 11:08:32 crc kubenswrapper[4975]: I0122 11:08:32.771285 4975 scope.go:117] "RemoveContainer" containerID="8ee2ac45009c61ee6247b9229c8e5887129712e0d41c80ac13b6f679b29b2925" Jan 22 11:08:32 crc kubenswrapper[4975]: I0122 11:08:32.843247 4975 scope.go:117] "RemoveContainer" containerID="008495f8ac806fa58ef0aaf021d944d50e5b2b93dcc819ecd082de554f6dc95b" Jan 22 11:08:32 crc kubenswrapper[4975]: I0122 11:08:32.862726 4975 scope.go:117] "RemoveContainer" containerID="4216a7271788481db4cef95838b69d2e8b3ccff9b021c9c6a8f32b981a488540" Jan 22 11:08:32 crc kubenswrapper[4975]: I0122 11:08:32.907107 4975 scope.go:117] "RemoveContainer" containerID="895f18587f09c87d7fa49a23ca9a99af702529597aa194c11f68e1f788fb8c0f" Jan 22 11:08:33 crc kubenswrapper[4975]: I0122 11:08:33.755475 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:08:34 crc kubenswrapper[4975]: I0122 11:08:34.933749 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"f456fed653d12e00848456fc08269bcad2671aac88dedee7d07f3b2fd5e96016"} Jan 22 11:08:41 crc kubenswrapper[4975]: I0122 11:08:41.046992 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zcnd"] Jan 22 
11:08:41 crc kubenswrapper[4975]: I0122 11:08:41.059678 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zcnd"] Jan 22 11:08:41 crc kubenswrapper[4975]: I0122 11:08:41.765168 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef0e4bb-4778-4690-abc6-21c903eb69e3" path="/var/lib/kubelet/pods/6ef0e4bb-4778-4690-abc6-21c903eb69e3/volumes" Jan 22 11:08:43 crc kubenswrapper[4975]: I0122 11:08:43.039106 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2m6rj"] Jan 22 11:08:43 crc kubenswrapper[4975]: I0122 11:08:43.054811 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2m6rj"] Jan 22 11:08:43 crc kubenswrapper[4975]: I0122 11:08:43.768393 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39939ecf-15f1-4cdc-a3d5-cf09de7c3b86" path="/var/lib/kubelet/pods/39939ecf-15f1-4cdc-a3d5-cf09de7c3b86/volumes" Jan 22 11:09:11 crc kubenswrapper[4975]: I0122 11:09:11.291197 4975 generic.go:334] "Generic (PLEG): container finished" podID="3b6f2238-afde-4cfc-83aa-c5d56a2afeb4" containerID="746928712feb4103c646945c6c51599e0a36e47ef578702a14fb8221f2725bdf" exitCode=0 Jan 22 11:09:11 crc kubenswrapper[4975]: I0122 11:09:11.291317 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz" event={"ID":"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4","Type":"ContainerDied","Data":"746928712feb4103c646945c6c51599e0a36e47ef578702a14fb8221f2725bdf"} Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.713645 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz" Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.832387 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-ssh-key-openstack-edpm-ipam\") pod \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.832432 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf9sm\" (UniqueName: \"kubernetes.io/projected/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-kube-api-access-vf9sm\") pod \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.832463 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-inventory\") pod \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\" (UID: \"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4\") " Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.838427 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-kube-api-access-vf9sm" (OuterVolumeSpecName: "kube-api-access-vf9sm") pod "3b6f2238-afde-4cfc-83aa-c5d56a2afeb4" (UID: "3b6f2238-afde-4cfc-83aa-c5d56a2afeb4"). InnerVolumeSpecName "kube-api-access-vf9sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.863536 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b6f2238-afde-4cfc-83aa-c5d56a2afeb4" (UID: "3b6f2238-afde-4cfc-83aa-c5d56a2afeb4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.887047 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-inventory" (OuterVolumeSpecName: "inventory") pod "3b6f2238-afde-4cfc-83aa-c5d56a2afeb4" (UID: "3b6f2238-afde-4cfc-83aa-c5d56a2afeb4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.935398 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.935445 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf9sm\" (UniqueName: \"kubernetes.io/projected/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-kube-api-access-vf9sm\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:12 crc kubenswrapper[4975]: I0122 11:09:12.935464 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6f2238-afde-4cfc-83aa-c5d56a2afeb4-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.309882 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz" event={"ID":"3b6f2238-afde-4cfc-83aa-c5d56a2afeb4","Type":"ContainerDied","Data":"04e843ca370ca2cfd0c178b1c28d3984ced0b779c9d300b28e94c74d4fac7c45"} Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.310152 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e843ca370ca2cfd0c178b1c28d3984ced0b779c9d300b28e94c74d4fac7c45" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.309964 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tzwgz" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.432910 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6"] Jan 22 11:09:13 crc kubenswrapper[4975]: E0122 11:09:13.433357 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b6f2238-afde-4cfc-83aa-c5d56a2afeb4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.433383 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b6f2238-afde-4cfc-83aa-c5d56a2afeb4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.433640 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b6f2238-afde-4cfc-83aa-c5d56a2afeb4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.434311 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.437466 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.437490 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.437563 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.438592 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.444940 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m82g6\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.445029 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf5nr\" (UniqueName: \"kubernetes.io/projected/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-kube-api-access-zf5nr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m82g6\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.445143 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m82g6\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.447317 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6"] Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.546862 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m82g6\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.547170 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf5nr\" (UniqueName: \"kubernetes.io/projected/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-kube-api-access-zf5nr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m82g6\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.547201 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m82g6\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.550761 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m82g6\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.550810 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m82g6\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.562870 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf5nr\" (UniqueName: \"kubernetes.io/projected/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-kube-api-access-zf5nr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m82g6\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:13 crc kubenswrapper[4975]: I0122 11:09:13.805147 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:09:14 crc kubenswrapper[4975]: I0122 11:09:14.349731 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6"] Jan 22 11:09:15 crc kubenswrapper[4975]: I0122 11:09:15.328261 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" event={"ID":"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00","Type":"ContainerStarted","Data":"2bf4c40461e8b49ed519d3bb0e13acbcd45de83498484cd9bf988fb58dbb9e9c"} Jan 22 11:09:16 crc kubenswrapper[4975]: I0122 11:09:16.338045 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" event={"ID":"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00","Type":"ContainerStarted","Data":"4f7519fe47ffd689b18b193f0622d7e2cca69b43e2fb675ff991ecc10d6cb1af"} Jan 22 11:09:16 crc kubenswrapper[4975]: I0122 11:09:16.354792 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" podStartSLOduration=2.676131083 podStartE2EDuration="3.354770917s" podCreationTimestamp="2026-01-22 11:09:13 +0000 UTC" firstStartedPulling="2026-01-22 11:09:14.35524193 +0000 UTC m=+1947.013278050" lastFinishedPulling="2026-01-22 11:09:15.033881774 +0000 UTC m=+1947.691917884" observedRunningTime="2026-01-22 11:09:16.353411511 +0000 UTC m=+1949.011447631" watchObservedRunningTime="2026-01-22 11:09:16.354770917 +0000 UTC m=+1949.012807047" Jan 22 11:09:28 crc kubenswrapper[4975]: I0122 11:09:28.058216 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hj9wd"] Jan 22 11:09:28 crc kubenswrapper[4975]: I0122 11:09:28.068795 4975 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hj9wd"] Jan 22 11:09:29 crc kubenswrapper[4975]: I0122 11:09:29.788211 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4" path="/var/lib/kubelet/pods/9a6c6b14-bfb8-40f9-a6f1-57700bbef7e4/volumes" Jan 22 11:09:33 crc kubenswrapper[4975]: I0122 11:09:33.044916 4975 scope.go:117] "RemoveContainer" containerID="928298a29e62cc725f9d6f84b054a0407f4c86450690746034b7781994797911" Jan 22 11:09:33 crc kubenswrapper[4975]: I0122 11:09:33.096200 4975 scope.go:117] "RemoveContainer" containerID="fa3abc8565a2f06bff11a63285a48dae0928de32844c130ccfb00840267b15d9" Jan 22 11:09:33 crc kubenswrapper[4975]: I0122 11:09:33.153417 4975 scope.go:117] "RemoveContainer" containerID="573c450cb51344bdd22a1e3ab36b330a3420972d6c26172b9cdcae5f0a10a5c7" Jan 22 11:10:10 crc kubenswrapper[4975]: I0122 11:10:10.871146 4975 generic.go:334] "Generic (PLEG): container finished" podID="fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00" containerID="4f7519fe47ffd689b18b193f0622d7e2cca69b43e2fb675ff991ecc10d6cb1af" exitCode=0 Jan 22 11:10:10 crc kubenswrapper[4975]: I0122 11:10:10.871215 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" event={"ID":"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00","Type":"ContainerDied","Data":"4f7519fe47ffd689b18b193f0622d7e2cca69b43e2fb675ff991ecc10d6cb1af"} Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.308999 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.358467 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-inventory\") pod \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.358641 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-ssh-key-openstack-edpm-ipam\") pod \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.358771 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf5nr\" (UniqueName: \"kubernetes.io/projected/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-kube-api-access-zf5nr\") pod \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\" (UID: \"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00\") " Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.367640 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-kube-api-access-zf5nr" (OuterVolumeSpecName: "kube-api-access-zf5nr") pod "fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00" (UID: "fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00"). InnerVolumeSpecName "kube-api-access-zf5nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.392249 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-inventory" (OuterVolumeSpecName: "inventory") pod "fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00" (UID: "fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.393060 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00" (UID: "fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.460806 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf5nr\" (UniqueName: \"kubernetes.io/projected/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-kube-api-access-zf5nr\") on node \"crc\" DevicePath \"\"" Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.460845 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.460858 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.895681 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" event={"ID":"fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00","Type":"ContainerDied","Data":"2bf4c40461e8b49ed519d3bb0e13acbcd45de83498484cd9bf988fb58dbb9e9c"} Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.895736 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bf4c40461e8b49ed519d3bb0e13acbcd45de83498484cd9bf988fb58dbb9e9c" Jan 22 11:10:12 crc kubenswrapper[4975]: I0122 11:10:12.895772 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m82g6" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.005430 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fs85s"] Jan 22 11:10:13 crc kubenswrapper[4975]: E0122 11:10:13.006031 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.006061 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.006466 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.007480 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.010873 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.011029 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.011048 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.010925 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.019508 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fs85s"] Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.071574 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fs85s\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.071811 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9789g\" (UniqueName: \"kubernetes.io/projected/f3c65871-e82d-4687-99c7-4af5194d7c98-kube-api-access-9789g\") pod \"ssh-known-hosts-edpm-deployment-fs85s\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.071950 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fs85s\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.173962 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fs85s\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.174230 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fs85s\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.174282 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9789g\" (UniqueName: \"kubernetes.io/projected/f3c65871-e82d-4687-99c7-4af5194d7c98-kube-api-access-9789g\") pod \"ssh-known-hosts-edpm-deployment-fs85s\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc 
kubenswrapper[4975]: I0122 11:10:13.180396 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fs85s\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.182978 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fs85s\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.203527 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9789g\" (UniqueName: \"kubernetes.io/projected/f3c65871-e82d-4687-99c7-4af5194d7c98-kube-api-access-9789g\") pod \"ssh-known-hosts-edpm-deployment-fs85s\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.345913 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.954229 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fs85s"] Jan 22 11:10:13 crc kubenswrapper[4975]: I0122 11:10:13.971571 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:10:14 crc kubenswrapper[4975]: I0122 11:10:14.917227 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" event={"ID":"f3c65871-e82d-4687-99c7-4af5194d7c98","Type":"ContainerStarted","Data":"3f8a9bdf349119f6a42262dc228434faaff1ff4fa13917b75774837b2d8b3f83"} Jan 22 11:10:15 crc kubenswrapper[4975]: I0122 11:10:15.934702 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" event={"ID":"f3c65871-e82d-4687-99c7-4af5194d7c98","Type":"ContainerStarted","Data":"425b35c7698ee82be746970471194c70cca548b51ac4a59b9d0b0b022c04351e"} Jan 22 11:10:15 crc kubenswrapper[4975]: I0122 11:10:15.961314 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" podStartSLOduration=2.809972835 podStartE2EDuration="3.961275518s" podCreationTimestamp="2026-01-22 11:10:12 +0000 UTC" firstStartedPulling="2026-01-22 11:10:13.971292068 +0000 UTC m=+2006.629328188" lastFinishedPulling="2026-01-22 11:10:15.122594751 +0000 UTC m=+2007.780630871" observedRunningTime="2026-01-22 11:10:15.959994451 +0000 UTC m=+2008.618030581" watchObservedRunningTime="2026-01-22 11:10:15.961275518 +0000 UTC m=+2008.619311628" Jan 22 11:10:23 crc kubenswrapper[4975]: I0122 11:10:23.021964 4975 generic.go:334] "Generic (PLEG): container finished" podID="f3c65871-e82d-4687-99c7-4af5194d7c98" containerID="425b35c7698ee82be746970471194c70cca548b51ac4a59b9d0b0b022c04351e" exitCode=0 Jan 22 11:10:23 crc kubenswrapper[4975]: I0122 11:10:23.022067 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" 
event={"ID":"f3c65871-e82d-4687-99c7-4af5194d7c98","Type":"ContainerDied","Data":"425b35c7698ee82be746970471194c70cca548b51ac4a59b9d0b0b022c04351e"} Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.549282 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.622994 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9789g\" (UniqueName: \"kubernetes.io/projected/f3c65871-e82d-4687-99c7-4af5194d7c98-kube-api-access-9789g\") pod \"f3c65871-e82d-4687-99c7-4af5194d7c98\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.623103 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-inventory-0\") pod \"f3c65871-e82d-4687-99c7-4af5194d7c98\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.623292 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-ssh-key-openstack-edpm-ipam\") pod \"f3c65871-e82d-4687-99c7-4af5194d7c98\" (UID: \"f3c65871-e82d-4687-99c7-4af5194d7c98\") " Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.629580 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c65871-e82d-4687-99c7-4af5194d7c98-kube-api-access-9789g" (OuterVolumeSpecName: "kube-api-access-9789g") pod "f3c65871-e82d-4687-99c7-4af5194d7c98" (UID: "f3c65871-e82d-4687-99c7-4af5194d7c98"). InnerVolumeSpecName "kube-api-access-9789g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.648533 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f3c65871-e82d-4687-99c7-4af5194d7c98" (UID: "f3c65871-e82d-4687-99c7-4af5194d7c98"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.661784 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f3c65871-e82d-4687-99c7-4af5194d7c98" (UID: "f3c65871-e82d-4687-99c7-4af5194d7c98"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.725326 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9789g\" (UniqueName: \"kubernetes.io/projected/f3c65871-e82d-4687-99c7-4af5194d7c98-kube-api-access-9789g\") on node \"crc\" DevicePath \"\"" Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.725568 4975 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:10:24 crc kubenswrapper[4975]: I0122 11:10:24.725579 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3c65871-e82d-4687-99c7-4af5194d7c98-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.049033 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" event={"ID":"f3c65871-e82d-4687-99c7-4af5194d7c98","Type":"ContainerDied","Data":"3f8a9bdf349119f6a42262dc228434faaff1ff4fa13917b75774837b2d8b3f83"} Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.049456 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f8a9bdf349119f6a42262dc228434faaff1ff4fa13917b75774837b2d8b3f83" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.049124 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fs85s" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.149685 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq"] Jan 22 11:10:25 crc kubenswrapper[4975]: E0122 11:10:25.150134 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c65871-e82d-4687-99c7-4af5194d7c98" containerName="ssh-known-hosts-edpm-deployment" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.150154 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c65871-e82d-4687-99c7-4af5194d7c98" containerName="ssh-known-hosts-edpm-deployment" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.150336 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c65871-e82d-4687-99c7-4af5194d7c98" containerName="ssh-known-hosts-edpm-deployment" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.150949 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.154318 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.154550 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.154867 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.155032 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.158070 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq"] Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.235070 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-flsmq\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.235161 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59kb\" (UniqueName: \"kubernetes.io/projected/36d854d2-b0f2-4864-b614-715d94ba5631-kube-api-access-z59kb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-flsmq\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.235318 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-flsmq\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.337190 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z59kb\" (UniqueName: \"kubernetes.io/projected/36d854d2-b0f2-4864-b614-715d94ba5631-kube-api-access-z59kb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-flsmq\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.337268 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-flsmq\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.337403 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-flsmq\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.340783 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-flsmq\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.342709 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-flsmq\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.354281 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z59kb\" (UniqueName: \"kubernetes.io/projected/36d854d2-b0f2-4864-b614-715d94ba5631-kube-api-access-z59kb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-flsmq\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:25 crc kubenswrapper[4975]: I0122 11:10:25.526695 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:26 crc kubenswrapper[4975]: I0122 11:10:26.109215 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq"] Jan 22 11:10:26 crc kubenswrapper[4975]: W0122 11:10:26.112853 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36d854d2_b0f2_4864_b614_715d94ba5631.slice/crio-d9f45290233ff72e3f22b0961b90f491f4b8bc4316932aafa5a3be4d24075c99 WatchSource:0}: Error finding container d9f45290233ff72e3f22b0961b90f491f4b8bc4316932aafa5a3be4d24075c99: Status 404 returned error can't find the container with id d9f45290233ff72e3f22b0961b90f491f4b8bc4316932aafa5a3be4d24075c99 Jan 22 11:10:27 crc kubenswrapper[4975]: I0122 11:10:27.075197 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" event={"ID":"36d854d2-b0f2-4864-b614-715d94ba5631","Type":"ContainerStarted","Data":"b2c419a3a02aa0642a49b65c947bbbe3cbae615fa1700c3dc5a3049bd3e5f37e"} Jan 22 11:10:27 crc kubenswrapper[4975]: I0122 11:10:27.075963 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" event={"ID":"36d854d2-b0f2-4864-b614-715d94ba5631","Type":"ContainerStarted","Data":"d9f45290233ff72e3f22b0961b90f491f4b8bc4316932aafa5a3be4d24075c99"} Jan 22 11:10:27 crc kubenswrapper[4975]: I0122 11:10:27.094575 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" podStartSLOduration=1.65963838 podStartE2EDuration="2.094559138s" podCreationTimestamp="2026-01-22 11:10:25 +0000 UTC" firstStartedPulling="2026-01-22 11:10:26.116065341 +0000 UTC m=+2018.774101461" lastFinishedPulling="2026-01-22 11:10:26.550986109 +0000 UTC m=+2019.209022219" 
observedRunningTime="2026-01-22 11:10:27.091464051 +0000 UTC m=+2019.749500161" watchObservedRunningTime="2026-01-22 11:10:27.094559138 +0000 UTC m=+2019.752595248" Jan 22 11:10:35 crc kubenswrapper[4975]: I0122 11:10:35.145403 4975 generic.go:334] "Generic (PLEG): container finished" podID="36d854d2-b0f2-4864-b614-715d94ba5631" containerID="b2c419a3a02aa0642a49b65c947bbbe3cbae615fa1700c3dc5a3049bd3e5f37e" exitCode=0 Jan 22 11:10:35 crc kubenswrapper[4975]: I0122 11:10:35.145515 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" event={"ID":"36d854d2-b0f2-4864-b614-715d94ba5631","Type":"ContainerDied","Data":"b2c419a3a02aa0642a49b65c947bbbe3cbae615fa1700c3dc5a3049bd3e5f37e"} Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.692506 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.886909 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-inventory\") pod \"36d854d2-b0f2-4864-b614-715d94ba5631\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.887062 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z59kb\" (UniqueName: \"kubernetes.io/projected/36d854d2-b0f2-4864-b614-715d94ba5631-kube-api-access-z59kb\") pod \"36d854d2-b0f2-4864-b614-715d94ba5631\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.887103 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-ssh-key-openstack-edpm-ipam\") pod \"36d854d2-b0f2-4864-b614-715d94ba5631\" (UID: \"36d854d2-b0f2-4864-b614-715d94ba5631\") " Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.893310 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d854d2-b0f2-4864-b614-715d94ba5631-kube-api-access-z59kb" (OuterVolumeSpecName: "kube-api-access-z59kb") pod "36d854d2-b0f2-4864-b614-715d94ba5631" (UID: "36d854d2-b0f2-4864-b614-715d94ba5631"). InnerVolumeSpecName "kube-api-access-z59kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.914741 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36d854d2-b0f2-4864-b614-715d94ba5631" (UID: "36d854d2-b0f2-4864-b614-715d94ba5631"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.921491 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-inventory" (OuterVolumeSpecName: "inventory") pod "36d854d2-b0f2-4864-b614-715d94ba5631" (UID: "36d854d2-b0f2-4864-b614-715d94ba5631"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.989105 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z59kb\" (UniqueName: \"kubernetes.io/projected/36d854d2-b0f2-4864-b614-715d94ba5631-kube-api-access-z59kb\") on node \"crc\" DevicePath \"\""
Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.989139 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 11:10:36 crc kubenswrapper[4975]: I0122 11:10:36.989151 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d854d2-b0f2-4864-b614-715d94ba5631-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.171404 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq" event={"ID":"36d854d2-b0f2-4864-b614-715d94ba5631","Type":"ContainerDied","Data":"d9f45290233ff72e3f22b0961b90f491f4b8bc4316932aafa5a3be4d24075c99"}
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.171466 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f45290233ff72e3f22b0961b90f491f4b8bc4316932aafa5a3be4d24075c99"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.171544 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-flsmq"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.271001 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"]
Jan 22 11:10:37 crc kubenswrapper[4975]: E0122 11:10:37.273261 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d854d2-b0f2-4864-b614-715d94ba5631" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.273291 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d854d2-b0f2-4864-b614-715d94ba5631" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.273604 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d854d2-b0f2-4864-b614-715d94ba5631" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.274271 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.278415 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.278574 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.279671 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.279811 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.293412 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"]
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.305607 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blb4q\" (UniqueName: \"kubernetes.io/projected/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-kube-api-access-blb4q\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.305695 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.305824 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.407351 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blb4q\" (UniqueName: \"kubernetes.io/projected/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-kube-api-access-blb4q\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.407613 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.407684 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.412595 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.412883 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.431817 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blb4q\" (UniqueName: \"kubernetes.io/projected/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-kube-api-access-blb4q\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:37 crc kubenswrapper[4975]: I0122 11:10:37.607908 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:38 crc kubenswrapper[4975]: I0122 11:10:38.190153 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"]
Jan 22 11:10:39 crc kubenswrapper[4975]: I0122 11:10:39.188223 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm" event={"ID":"9f5109b8-3671-4f96-8d9f-625b4c02b1b6","Type":"ContainerStarted","Data":"547eca7e6ada8b6a7eb701d68bf4310f74a96fc8ae1e5f9320629da4974d4ee9"}
Jan 22 11:10:39 crc kubenswrapper[4975]: I0122 11:10:39.188580 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm" event={"ID":"9f5109b8-3671-4f96-8d9f-625b4c02b1b6","Type":"ContainerStarted","Data":"4ad17e28f1a5160821b1e4e01c5ece5c2a1ef0ea20c50690cf00b5a5b7d373d2"}
Jan 22 11:10:39 crc kubenswrapper[4975]: I0122 11:10:39.209567 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm" podStartSLOduration=1.8115562170000001 podStartE2EDuration="2.209542686s" podCreationTimestamp="2026-01-22 11:10:37 +0000 UTC" firstStartedPulling="2026-01-22 11:10:38.198059084 +0000 UTC m=+2030.856095204" lastFinishedPulling="2026-01-22 11:10:38.596045563 +0000 UTC m=+2031.254081673" observedRunningTime="2026-01-22 11:10:39.208458155 +0000 UTC m=+2031.866494265" watchObservedRunningTime="2026-01-22 11:10:39.209542686 +0000 UTC m=+2031.867578806"
Jan 22 11:10:49 crc kubenswrapper[4975]: I0122 11:10:49.344168 4975 generic.go:334] "Generic (PLEG): container finished" podID="9f5109b8-3671-4f96-8d9f-625b4c02b1b6" containerID="547eca7e6ada8b6a7eb701d68bf4310f74a96fc8ae1e5f9320629da4974d4ee9" exitCode=0
Jan 22 11:10:49 crc kubenswrapper[4975]: I0122 11:10:49.344376 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm" event={"ID":"9f5109b8-3671-4f96-8d9f-625b4c02b1b6","Type":"ContainerDied","Data":"547eca7e6ada8b6a7eb701d68bf4310f74a96fc8ae1e5f9320629da4974d4ee9"}
Jan 22 11:10:50 crc kubenswrapper[4975]: I0122 11:10:50.792203 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:50 crc kubenswrapper[4975]: I0122 11:10:50.916788 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blb4q\" (UniqueName: \"kubernetes.io/projected/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-kube-api-access-blb4q\") pod \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") "
Jan 22 11:10:50 crc kubenswrapper[4975]: I0122 11:10:50.917296 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-inventory\") pod \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") "
Jan 22 11:10:50 crc kubenswrapper[4975]: I0122 11:10:50.917420 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-ssh-key-openstack-edpm-ipam\") pod \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\" (UID: \"9f5109b8-3671-4f96-8d9f-625b4c02b1b6\") "
Jan 22 11:10:50 crc kubenswrapper[4975]: I0122 11:10:50.926258 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-kube-api-access-blb4q" (OuterVolumeSpecName: "kube-api-access-blb4q") pod "9f5109b8-3671-4f96-8d9f-625b4c02b1b6" (UID: "9f5109b8-3671-4f96-8d9f-625b4c02b1b6"). InnerVolumeSpecName "kube-api-access-blb4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:10:50 crc kubenswrapper[4975]: I0122 11:10:50.949490 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f5109b8-3671-4f96-8d9f-625b4c02b1b6" (UID: "9f5109b8-3671-4f96-8d9f-625b4c02b1b6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:10:50 crc kubenswrapper[4975]: I0122 11:10:50.949953 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-inventory" (OuterVolumeSpecName: "inventory") pod "9f5109b8-3671-4f96-8d9f-625b4c02b1b6" (UID: "9f5109b8-3671-4f96-8d9f-625b4c02b1b6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.019772 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blb4q\" (UniqueName: \"kubernetes.io/projected/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-kube-api-access-blb4q\") on node \"crc\" DevicePath \"\""
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.020023 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.020033 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5109b8-3671-4f96-8d9f-625b4c02b1b6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.363175 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm" event={"ID":"9f5109b8-3671-4f96-8d9f-625b4c02b1b6","Type":"ContainerDied","Data":"4ad17e28f1a5160821b1e4e01c5ece5c2a1ef0ea20c50690cf00b5a5b7d373d2"}
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.363222 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ad17e28f1a5160821b1e4e01c5ece5c2a1ef0ea20c50690cf00b5a5b7d373d2"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.363282 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.449355 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"]
Jan 22 11:10:51 crc kubenswrapper[4975]: E0122 11:10:51.449711 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f5109b8-3671-4f96-8d9f-625b4c02b1b6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.449733 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f5109b8-3671-4f96-8d9f-625b4c02b1b6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.449916 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f5109b8-3671-4f96-8d9f-625b4c02b1b6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.450509 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
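The `pod_startup_latency_tracker` entry above carries the kubelet's startup figures for the reboot-os pod: the SLO duration as a bare float, the end-to-end duration as a quoted Go duration string, plus the image-pull window. A minimal Python sketch for pulling those figures out of a plain-text journal export; the `kubelet.log` file name (e.g. from a `journalctl -u kubelet > kubelet.log` export, unit name assumed) is an illustration, not something this log specifies:

```python
import re

# Matches the kubelet's "Observed pod startup duration" entries as they
# appear in this journal: pod name, SLO duration (bare float, seconds)
# and end-to-end duration (quoted Go duration string).
PATTERN = re.compile(
    r'pod_startup_latency_tracker\.go:\d+\].*?pod="(?P<pod>[^"]+)"'
    r'.*?podStartSLOduration=(?P<slo>[\d.]+)'
    r'.*?podStartE2EDuration="(?P<e2e>[^"]+)"'
)

def startup_durations(path="kubelet.log"):  # hypothetical export file
    """Yield (pod, SLO duration in seconds, end-to-end duration) tuples."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                yield m["pod"], float(m["slo"]), m["e2e"]

if __name__ == "__main__":
    for pod, slo, e2e in startup_durations():
        print(f"{pod}: SLO={slo:.3f}s, e2e={e2e}")
```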
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.455082 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.455099 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.455359 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.455457 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.455491 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.455575 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.455681 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.456102 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.465944 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"]
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.630229 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.630290 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.630346 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.630429 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.630625 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.630672 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfgdp\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-kube-api-access-xfgdp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.630719 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.630770 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.630965 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.631102 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.631160 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.631292 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.631457 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.631503 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733574 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733648 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733670 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733716 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733746 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733770 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733823 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733862 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733906 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733927 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.733990 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.734017 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfgdp\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-kube-api-access-xfgdp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.734048 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.734076 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.739415 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.739909 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.740164 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.740140 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.741487 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.741648 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.741831 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.742001 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.744607 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.744727 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.744764 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.744772 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.745844 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.761019 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfgdp\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-kube-api-access-xfgdp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:51 crc kubenswrapper[4975]: I0122 11:10:51.766594 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:10:52 crc kubenswrapper[4975]: I0122 11:10:52.284928 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"]
Jan 22 11:10:52 crc kubenswrapper[4975]: I0122 11:10:52.373822 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v" event={"ID":"2235b954-42a4-4665-a60a-e9f4e49b3e45","Type":"ContainerStarted","Data":"8523f7ac13dbf2843e0dab0c6476cf8f1f6c39c553ede9f41fab0fb5a4596102"}
Jan 22 11:10:53 crc kubenswrapper[4975]: I0122 11:10:53.383094 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v" event={"ID":"2235b954-42a4-4665-a60a-e9f4e49b3e45","Type":"ContainerStarted","Data":"ab431eb959590bfda675ecec7fc80fa688cbc87fc228728713a2f0c876d303e6"}
Jan 22 11:10:53 crc kubenswrapper[4975]: I0122 11:10:53.410752 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v" podStartSLOduration=2.017530328 podStartE2EDuration="2.410733803s" podCreationTimestamp="2026-01-22 11:10:51 +0000 UTC" firstStartedPulling="2026-01-22 11:10:52.283491886 +0000 UTC m=+2044.941527996" lastFinishedPulling="2026-01-22 11:10:52.676695321 +0000 UTC m=+2045.334731471" observedRunningTime="2026-01-22 11:10:53.402220491 +0000 UTC m=+2046.060256601" watchObservedRunningTime="2026-01-22 11:10:53.410733803 +0000 UTC m=+2046.068769903"
Jan 22 11:10:57 crc kubenswrapper[4975]: I0122 11:10:57.436053 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:10:57 crc kubenswrapper[4975]: I0122 11:10:57.436716 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:11:27 crc kubenswrapper[4975]: I0122 11:11:27.436524 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:11:27 crc kubenswrapper[4975]: I0122 11:11:27.437106 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:11:34 crc kubenswrapper[4975]: I0122 11:11:34.792146 4975 generic.go:334] "Generic (PLEG): container finished" podID="2235b954-42a4-4665-a60a-e9f4e49b3e45" containerID="ab431eb959590bfda675ecec7fc80fa688cbc87fc228728713a2f0c876d303e6" exitCode=0
Jan 22 11:11:34 crc kubenswrapper[4975]: I0122 11:11:34.792225 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v" event={"ID":"2235b954-42a4-4665-a60a-e9f4e49b3e45","Type":"ContainerDied","Data":"ab431eb959590bfda675ecec7fc80fa688cbc87fc228728713a2f0c876d303e6"}
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.176910 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.281803 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-libvirt-combined-ca-bundle\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.281921 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-nova-combined-ca-bundle\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.281966 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.281994 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ovn-combined-ca-bundle\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282040 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-repo-setup-combined-ca-bundle\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282078 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-neutron-metadata-combined-ca-bundle\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282112 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282189 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282234 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-inventory\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282257 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-telemetry-combined-ca-bundle\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282371 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ssh-key-openstack-edpm-ipam\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282399 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282427 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfgdp\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-kube-api-access-xfgdp\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.282496 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-bootstrap-combined-ca-bundle\") pod \"2235b954-42a4-4665-a60a-e9f4e49b3e45\" (UID: \"2235b954-42a4-4665-a60a-e9f4e49b3e45\") "
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.289519 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
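The entries around this point show the kubelet's per-pod volume lifecycle in full: `operationExecutor.VerifyControllerAttachedVolume` and `MountVolume.SetUp succeeded` on the way up, then `operationExecutor.UnmountVolume`, `UnmountVolume.TearDown succeeded` and finally "Volume detached" on the way down. A sketch, against the same assumed export file, that cross-checks set-up volumes against detach reports, e.g. to spot volumes that never came back off the node:

```python
import re

# The mount/detach entries above carry the volume UniqueName in
# backslash-escaped quotes, e.g. (UniqueName: \"kubernetes.io/secret/<uid>-inventory\"),
# where <uid> stands for the pod UID.
UNIQUE = re.compile(r'UniqueName: \\"([^\\"]+)\\"')

def unbalanced_volumes(path="kubelet.log"):  # hypothetical export file
    """Return volumes reported set up but never reported detached."""
    mounted, detached = set(), set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            names = UNIQUE.findall(line)
            if "MountVolume.SetUp succeeded" in line:
                mounted.update(names)
            elif "Volume detached for volume" in line:
                detached.update(names)
    return mounted - detached

if __name__ == "__main__":
    for vol in sorted(unbalanced_volumes()):
        print("still mounted:", vol)
```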
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.292218 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.303592 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.305056 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.312735 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.314593 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.314645 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.314758 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.315212 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-kube-api-access-xfgdp" (OuterVolumeSpecName: "kube-api-access-xfgdp") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "kube-api-access-xfgdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.315244 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.315744 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.318589 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.344524 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-inventory" (OuterVolumeSpecName: "inventory") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.368625 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2235b954-42a4-4665-a60a-e9f4e49b3e45" (UID: "2235b954-42a4-4665-a60a-e9f4e49b3e45"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384283 4975 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384322 4975 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384357 4975 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384392 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384412 4975 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384428 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384444 4975 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384461 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfgdp\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-kube-api-access-xfgdp\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384473 4975 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384485 4975 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384497 4975 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384509 4975 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2235b954-42a4-4665-a60a-e9f4e49b3e45-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384523 4975 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.384535 4975 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235b954-42a4-4665-a60a-e9f4e49b3e45-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.809575 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v" event={"ID":"2235b954-42a4-4665-a60a-e9f4e49b3e45","Type":"ContainerDied","Data":"8523f7ac13dbf2843e0dab0c6476cf8f1f6c39c553ede9f41fab0fb5a4596102"}
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.809612 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8523f7ac13dbf2843e0dab0c6476cf8f1f6c39c553ede9f41fab0fb5a4596102"
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.809660 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v"
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.923572 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt"]
Jan 22 11:11:36 crc kubenswrapper[4975]: E0122 11:11:36.924508 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2235b954-42a4-4665-a60a-e9f4e49b3e45" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.924539 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2235b954-42a4-4665-a60a-e9f4e49b3e45" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.924863 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="2235b954-42a4-4665-a60a-e9f4e49b3e45" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.925826 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt"
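The machine-config-daemon liveness probe failed at 11:10:57 and 11:11:27 above and fails again at 11:11:57 below, where the kubelet finally kills and restarts the container. A small sketch, against the same assumed plain-text export, that tallies "Probe failed" entries per container:

```python
import re
from collections import Counter

# Matches the kubelet prober entries shown in this journal.
PROBE = re.compile(
    r'prober\.go:\d+\] "Probe failed" probeType="(?P<type>[^"]+)"'
    r' pod="(?P<pod>[^"]+)".*?containerName="(?P<container>[^"]+)"'
)

def failed_probes(path="kubelet.log"):  # hypothetical export file
    """Count probe failures per (probe type, pod, container)."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = PROBE.search(line)
            if m:
                counts[(m["type"], m["pod"], m["container"])] += 1
    return counts

if __name__ == "__main__":
    for (ptype, pod, container), n in failed_probes().most_common():
        print(f"{n:3d} {ptype} failures  {pod}/{container}")
```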
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.927755 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.928062 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.928423 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.928640 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.928851 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.933635 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt"] Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.997313 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.997414 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.997452 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.997475 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:36 crc kubenswrapper[4975]: I0122 11:11:36.997514 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrrpg\" (UniqueName: \"kubernetes.io/projected/0b6be64b-5e04-4797-8617-681e72ebb9d7-kube-api-access-lrrpg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.100029 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.100229 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.100296 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.100371 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.100412 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrrpg\" (UniqueName: \"kubernetes.io/projected/0b6be64b-5e04-4797-8617-681e72ebb9d7-kube-api-access-lrrpg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.101527 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.105865 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.106582 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.109169 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.131469 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrrpg\" (UniqueName: \"kubernetes.io/projected/0b6be64b-5e04-4797-8617-681e72ebb9d7-kube-api-access-lrrpg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dg2vt\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.251928 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.776839 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt"] Jan 22 11:11:37 crc kubenswrapper[4975]: W0122 11:11:37.785696 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b6be64b_5e04_4797_8617_681e72ebb9d7.slice/crio-abdb2c86c2be36ed664774010924008372a0780ccf3d004101c5760b382beba7 WatchSource:0}: Error finding container abdb2c86c2be36ed664774010924008372a0780ccf3d004101c5760b382beba7: Status 404 returned error can't find the container with id abdb2c86c2be36ed664774010924008372a0780ccf3d004101c5760b382beba7 Jan 22 11:11:37 crc kubenswrapper[4975]: I0122 11:11:37.818446 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" event={"ID":"0b6be64b-5e04-4797-8617-681e72ebb9d7","Type":"ContainerStarted","Data":"abdb2c86c2be36ed664774010924008372a0780ccf3d004101c5760b382beba7"} Jan 22 11:11:38 crc kubenswrapper[4975]: I0122 11:11:38.828983 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" event={"ID":"0b6be64b-5e04-4797-8617-681e72ebb9d7","Type":"ContainerStarted","Data":"6dc0b15499ad94d82e8c6351228c43b9836a9d00b040afe43b49e28001c8f3cc"} Jan 22 11:11:57 crc kubenswrapper[4975]: I0122 11:11:57.436698 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:11:57 crc kubenswrapper[4975]: I0122 11:11:57.437442 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:11:57 crc kubenswrapper[4975]: I0122 11:11:57.437482 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 11:11:57 crc kubenswrapper[4975]: I0122 11:11:57.437994 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f456fed653d12e00848456fc08269bcad2671aac88dedee7d07f3b2fd5e96016"} 
pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:11:57 crc kubenswrapper[4975]: I0122 11:11:57.438043 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://f456fed653d12e00848456fc08269bcad2671aac88dedee7d07f3b2fd5e96016" gracePeriod=600 Jan 22 11:11:58 crc kubenswrapper[4975]: I0122 11:11:58.003466 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="f456fed653d12e00848456fc08269bcad2671aac88dedee7d07f3b2fd5e96016" exitCode=0 Jan 22 11:11:58 crc kubenswrapper[4975]: I0122 11:11:58.003571 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"f456fed653d12e00848456fc08269bcad2671aac88dedee7d07f3b2fd5e96016"} Jan 22 11:11:58 crc kubenswrapper[4975]: I0122 11:11:58.003792 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7"} Jan 22 11:11:58 crc kubenswrapper[4975]: I0122 11:11:58.003821 4975 scope.go:117] "RemoveContainer" containerID="518e61c8f2bd740db6ac3636287779babca1f0242cb96a31b4f0350f015bc398" Jan 22 11:11:58 crc kubenswrapper[4975]: I0122 11:11:58.031662 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" podStartSLOduration=21.383510919 podStartE2EDuration="22.031646584s" podCreationTimestamp="2026-01-22 11:11:36 +0000 UTC" firstStartedPulling="2026-01-22 11:11:37.788365535 +0000 UTC m=+2090.446401645" lastFinishedPulling="2026-01-22 11:11:38.43650116 +0000 UTC m=+2091.094537310" observedRunningTime="2026-01-22 11:11:38.846642615 +0000 UTC m=+2091.504678725" watchObservedRunningTime="2026-01-22 11:11:58.031646584 +0000 UTC m=+2110.689682704" Jan 22 11:12:51 crc kubenswrapper[4975]: I0122 11:12:51.515821 4975 generic.go:334] "Generic (PLEG): container finished" podID="0b6be64b-5e04-4797-8617-681e72ebb9d7" containerID="6dc0b15499ad94d82e8c6351228c43b9836a9d00b040afe43b49e28001c8f3cc" exitCode=0 Jan 22 11:12:51 crc kubenswrapper[4975]: I0122 11:12:51.515965 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" event={"ID":"0b6be64b-5e04-4797-8617-681e72ebb9d7","Type":"ContainerDied","Data":"6dc0b15499ad94d82e8c6351228c43b9836a9d00b040afe43b49e28001c8f3cc"} Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.028953 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.153726 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovn-combined-ca-bundle\") pod \"0b6be64b-5e04-4797-8617-681e72ebb9d7\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.153971 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-inventory\") pod \"0b6be64b-5e04-4797-8617-681e72ebb9d7\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.154038 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ssh-key-openstack-edpm-ipam\") pod \"0b6be64b-5e04-4797-8617-681e72ebb9d7\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.154090 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovncontroller-config-0\") pod \"0b6be64b-5e04-4797-8617-681e72ebb9d7\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.154174 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrrpg\" (UniqueName: \"kubernetes.io/projected/0b6be64b-5e04-4797-8617-681e72ebb9d7-kube-api-access-lrrpg\") pod \"0b6be64b-5e04-4797-8617-681e72ebb9d7\" (UID: \"0b6be64b-5e04-4797-8617-681e72ebb9d7\") " Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.161405 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0b6be64b-5e04-4797-8617-681e72ebb9d7" (UID: "0b6be64b-5e04-4797-8617-681e72ebb9d7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.161546 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6be64b-5e04-4797-8617-681e72ebb9d7-kube-api-access-lrrpg" (OuterVolumeSpecName: "kube-api-access-lrrpg") pod "0b6be64b-5e04-4797-8617-681e72ebb9d7" (UID: "0b6be64b-5e04-4797-8617-681e72ebb9d7"). InnerVolumeSpecName "kube-api-access-lrrpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.187919 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-inventory" (OuterVolumeSpecName: "inventory") pod "0b6be64b-5e04-4797-8617-681e72ebb9d7" (UID: "0b6be64b-5e04-4797-8617-681e72ebb9d7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.188377 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b6be64b-5e04-4797-8617-681e72ebb9d7" (UID: "0b6be64b-5e04-4797-8617-681e72ebb9d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.203104 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0b6be64b-5e04-4797-8617-681e72ebb9d7" (UID: "0b6be64b-5e04-4797-8617-681e72ebb9d7"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.257031 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.257079 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.257101 4975 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.257117 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrrpg\" (UniqueName: \"kubernetes.io/projected/0b6be64b-5e04-4797-8617-681e72ebb9d7-kube-api-access-lrrpg\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.257133 4975 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6be64b-5e04-4797-8617-681e72ebb9d7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.549539 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" event={"ID":"0b6be64b-5e04-4797-8617-681e72ebb9d7","Type":"ContainerDied","Data":"abdb2c86c2be36ed664774010924008372a0780ccf3d004101c5760b382beba7"} Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.550062 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abdb2c86c2be36ed664774010924008372a0780ccf3d004101c5760b382beba7" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.549606 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dg2vt" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.722101 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2"] Jan 22 11:12:53 crc kubenswrapper[4975]: E0122 11:12:53.722730 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6be64b-5e04-4797-8617-681e72ebb9d7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.722763 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6be64b-5e04-4797-8617-681e72ebb9d7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.723066 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6be64b-5e04-4797-8617-681e72ebb9d7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.724000 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.726399 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.726829 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.727003 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.727095 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.727229 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.727482 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.740308 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2"] Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.766644 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.766861 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.766913 4975 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.767049 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdzhh\" (UniqueName: \"kubernetes.io/projected/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-kube-api-access-tdzhh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.767181 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.767246 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.868429 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdzhh\" (UniqueName: \"kubernetes.io/projected/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-kube-api-access-tdzhh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.868519 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.868554 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.868607 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.868652 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.868675 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.873785 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.874112 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.880705 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.881058 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.881136 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:53 crc kubenswrapper[4975]: I0122 11:12:53.888827 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdzhh\" (UniqueName: \"kubernetes.io/projected/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-kube-api-access-tdzhh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:54 crc kubenswrapper[4975]: I0122 11:12:54.039937 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:12:54 crc kubenswrapper[4975]: I0122 11:12:54.542775 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2"] Jan 22 11:12:55 crc kubenswrapper[4975]: I0122 11:12:55.568052 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" event={"ID":"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57","Type":"ContainerStarted","Data":"38c3c9888678859dff5a01d221fde380966f8a2f3c98f67419881b51b0be332d"} Jan 22 11:12:55 crc kubenswrapper[4975]: I0122 11:12:55.569871 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" event={"ID":"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57","Type":"ContainerStarted","Data":"80f0d4c555a70a2a9e2d5a73335c28a4b277c1248ccd5651b1115aee0ed41e46"} Jan 22 11:12:55 crc kubenswrapper[4975]: I0122 11:12:55.589451 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" podStartSLOduration=2.075094331 podStartE2EDuration="2.589429122s" podCreationTimestamp="2026-01-22 11:12:53 +0000 UTC" firstStartedPulling="2026-01-22 11:12:54.553348301 +0000 UTC m=+2167.211384411" lastFinishedPulling="2026-01-22 11:12:55.067683072 +0000 UTC m=+2167.725719202" observedRunningTime="2026-01-22 11:12:55.58634017 +0000 UTC m=+2168.244376330" watchObservedRunningTime="2026-01-22 11:12:55.589429122 +0000 UTC m=+2168.247465272" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.704946 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9q7vq"] Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.711085 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.744641 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9q7vq"] Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.864506 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-utilities\") pod \"community-operators-9q7vq\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.864577 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-catalog-content\") pod \"community-operators-9q7vq\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.864729 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfz2k\" (UniqueName: \"kubernetes.io/projected/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-kube-api-access-kfz2k\") pod \"community-operators-9q7vq\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.967104 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-utilities\") pod \"community-operators-9q7vq\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.967421 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-catalog-content\") pod \"community-operators-9q7vq\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.967547 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfz2k\" (UniqueName: \"kubernetes.io/projected/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-kube-api-access-kfz2k\") pod \"community-operators-9q7vq\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.967655 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-utilities\") pod \"community-operators-9q7vq\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.967859 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-catalog-content\") pod \"community-operators-9q7vq\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:35 crc kubenswrapper[4975]: I0122 11:13:35.993572 4975 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kfz2k\" (UniqueName: \"kubernetes.io/projected/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-kube-api-access-kfz2k\") pod \"community-operators-9q7vq\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:36 crc kubenswrapper[4975]: I0122 11:13:36.057975 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:36 crc kubenswrapper[4975]: I0122 11:13:36.606852 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9q7vq"] Jan 22 11:13:37 crc kubenswrapper[4975]: I0122 11:13:37.001867 4975 generic.go:334] "Generic (PLEG): container finished" podID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerID="bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef" exitCode=0 Jan 22 11:13:37 crc kubenswrapper[4975]: I0122 11:13:37.001933 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q7vq" event={"ID":"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08","Type":"ContainerDied","Data":"bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef"} Jan 22 11:13:37 crc kubenswrapper[4975]: I0122 11:13:37.002219 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q7vq" event={"ID":"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08","Type":"ContainerStarted","Data":"ba31b7aa1ad76db31e5ccd95a5a64896bf26ff4e0d8651102901414466e01bd0"} Jan 22 11:13:38 crc kubenswrapper[4975]: I0122 11:13:38.017821 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q7vq" event={"ID":"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08","Type":"ContainerStarted","Data":"7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5"} Jan 22 11:13:40 crc kubenswrapper[4975]: I0122 11:13:40.037454 4975 generic.go:334] "Generic (PLEG): container finished" podID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerID="7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5" exitCode=0 Jan 22 11:13:40 crc kubenswrapper[4975]: I0122 11:13:40.037552 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q7vq" event={"ID":"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08","Type":"ContainerDied","Data":"7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5"} Jan 22 11:13:41 crc kubenswrapper[4975]: I0122 11:13:41.048208 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q7vq" event={"ID":"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08","Type":"ContainerStarted","Data":"a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9"} Jan 22 11:13:41 crc kubenswrapper[4975]: I0122 11:13:41.074234 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9q7vq" podStartSLOduration=2.627424203 podStartE2EDuration="6.074207907s" podCreationTimestamp="2026-01-22 11:13:35 +0000 UTC" firstStartedPulling="2026-01-22 11:13:37.003680107 +0000 UTC m=+2209.661716217" lastFinishedPulling="2026-01-22 11:13:40.450463811 +0000 UTC m=+2213.108499921" observedRunningTime="2026-01-22 11:13:41.065921325 +0000 UTC m=+2213.723957495" watchObservedRunningTime="2026-01-22 11:13:41.074207907 +0000 UTC m=+2213.732244057" Jan 22 11:13:46 crc kubenswrapper[4975]: I0122 11:13:46.058559 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:46 crc kubenswrapper[4975]: I0122 11:13:46.058837 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:46 crc kubenswrapper[4975]: I0122 11:13:46.107624 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:46 crc kubenswrapper[4975]: I0122 11:13:46.164448 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:46 crc kubenswrapper[4975]: I0122 11:13:46.351445 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9q7vq"] Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.107881 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9q7vq" podUID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerName="registry-server" containerID="cri-o://a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9" gracePeriod=2 Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.590713 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.753950 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-utilities\") pod \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.754111 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfz2k\" (UniqueName: \"kubernetes.io/projected/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-kube-api-access-kfz2k\") pod \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.754255 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-catalog-content\") pod \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\" (UID: \"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08\") " Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.755921 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-utilities" (OuterVolumeSpecName: "utilities") pod "b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" (UID: "b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.778663 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-kube-api-access-kfz2k" (OuterVolumeSpecName: "kube-api-access-kfz2k") pod "b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" (UID: "b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08"). InnerVolumeSpecName "kube-api-access-kfz2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.826885 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" (UID: "b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.857543 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfz2k\" (UniqueName: \"kubernetes.io/projected/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-kube-api-access-kfz2k\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.857582 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:48 crc kubenswrapper[4975]: I0122 11:13:48.857595 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.122691 4975 generic.go:334] "Generic (PLEG): container finished" podID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerID="a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9" exitCode=0 Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.122752 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q7vq" event={"ID":"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08","Type":"ContainerDied","Data":"a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9"} Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.122815 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9q7vq" event={"ID":"b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08","Type":"ContainerDied","Data":"ba31b7aa1ad76db31e5ccd95a5a64896bf26ff4e0d8651102901414466e01bd0"} Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.122833 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9q7vq" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.122839 4975 scope.go:117] "RemoveContainer" containerID="a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.159419 4975 scope.go:117] "RemoveContainer" containerID="7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.169049 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9q7vq"] Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.177698 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9q7vq"] Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.200863 4975 scope.go:117] "RemoveContainer" containerID="bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.233677 4975 scope.go:117] "RemoveContainer" containerID="a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9" Jan 22 11:13:49 crc kubenswrapper[4975]: E0122 11:13:49.234154 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9\": container with ID starting with a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9 not found: ID does not exist" containerID="a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.234219 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9"} err="failed to get container status \"a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9\": rpc error: code = NotFound desc = could not find container \"a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9\": container with ID starting with a77507f9855120f73c964608a43825796f750e390cb91dd3eb19081a08c65fb9 not found: ID does not exist" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.234263 4975 scope.go:117] "RemoveContainer" containerID="7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5" Jan 22 11:13:49 crc kubenswrapper[4975]: E0122 11:13:49.234693 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5\": container with ID starting with 7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5 not found: ID does not exist" containerID="7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.234771 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5"} err="failed to get container status \"7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5\": rpc error: code = NotFound desc = could not find container \"7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5\": container with ID starting with 7f81d20cf14aa74d0fb07b1c2de8e17114e7ae6c12acb3ac452182d60f9175c5 not found: ID does not exist" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.234796 4975 scope.go:117] "RemoveContainer" 
containerID="bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef" Jan 22 11:13:49 crc kubenswrapper[4975]: E0122 11:13:49.235108 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef\": container with ID starting with bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef not found: ID does not exist" containerID="bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.235201 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef"} err="failed to get container status \"bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef\": rpc error: code = NotFound desc = could not find container \"bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef\": container with ID starting with bdee148d1bc8fad62583ff3731fe13b7c71b48fdf5fedbd7f282367232cd31ef not found: ID does not exist" Jan 22 11:13:49 crc kubenswrapper[4975]: I0122 11:13:49.770284 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" path="/var/lib/kubelet/pods/b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08/volumes" Jan 22 11:13:53 crc kubenswrapper[4975]: I0122 11:13:53.164560 4975 generic.go:334] "Generic (PLEG): container finished" podID="ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" containerID="38c3c9888678859dff5a01d221fde380966f8a2f3c98f67419881b51b0be332d" exitCode=0 Jan 22 11:13:53 crc kubenswrapper[4975]: I0122 11:13:53.164639 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" event={"ID":"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57","Type":"ContainerDied","Data":"38c3c9888678859dff5a01d221fde380966f8a2f3c98f67419881b51b0be332d"} Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.596162 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.770103 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdzhh\" (UniqueName: \"kubernetes.io/projected/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-kube-api-access-tdzhh\") pod \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.770378 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-nova-metadata-neutron-config-0\") pod \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.770620 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-metadata-combined-ca-bundle\") pod \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.770677 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-ssh-key-openstack-edpm-ipam\") pod \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.770879 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-ovn-metadata-agent-neutron-config-0\") pod \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.770979 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-inventory\") pod \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\" (UID: \"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57\") " Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.776409 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-kube-api-access-tdzhh" (OuterVolumeSpecName: "kube-api-access-tdzhh") pod "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" (UID: "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57"). InnerVolumeSpecName "kube-api-access-tdzhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.780720 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" (UID: "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.802604 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" (UID: "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.807473 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" (UID: "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.816547 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-inventory" (OuterVolumeSpecName: "inventory") pod "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" (UID: "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.833290 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" (UID: "ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.873705 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.873739 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdzhh\" (UniqueName: \"kubernetes.io/projected/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-kube-api-access-tdzhh\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.873752 4975 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.873768 4975 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.873785 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:54 crc kubenswrapper[4975]: I0122 11:13:54.873797 4975 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.187664 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" event={"ID":"ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57","Type":"ContainerDied","Data":"80f0d4c555a70a2a9e2d5a73335c28a4b277c1248ccd5651b1115aee0ed41e46"} Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.187700 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80f0d4c555a70a2a9e2d5a73335c28a4b277c1248ccd5651b1115aee0ed41e46" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.187758 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.293473 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88"] Jan 22 11:13:55 crc kubenswrapper[4975]: E0122 11:13:55.293962 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerName="registry-server" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.293979 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerName="registry-server" Jan 22 11:13:55 crc kubenswrapper[4975]: E0122 11:13:55.294013 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.294021 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 11:13:55 crc kubenswrapper[4975]: E0122 11:13:55.294037 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerName="extract-content" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.294042 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerName="extract-content" Jan 22 11:13:55 crc kubenswrapper[4975]: E0122 11:13:55.294062 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerName="extract-utilities" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.294068 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerName="extract-utilities" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.294266 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8af01fb-5ee4-4ccd-a51f-1e45dbd5ff08" containerName="registry-server" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.294277 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.295113 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.297129 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.297489 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.298042 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.298243 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.303686 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.303923 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88"] Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.485054 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.485425 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.485555 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.485725 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.485828 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq5ff\" (UniqueName: \"kubernetes.io/projected/f26c2f80-766b-4f5c-9c50-13fba04daff7-kube-api-access-xq5ff\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.587685 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.587744 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.587822 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.587938 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.587974 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq5ff\" (UniqueName: \"kubernetes.io/projected/f26c2f80-766b-4f5c-9c50-13fba04daff7-kube-api-access-xq5ff\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.591415 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.593605 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.594117 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.601234 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.605471 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq5ff\" (UniqueName: \"kubernetes.io/projected/f26c2f80-766b-4f5c-9c50-13fba04daff7-kube-api-access-xq5ff\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6n88\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:55 crc kubenswrapper[4975]: I0122 11:13:55.613833 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:13:56 crc kubenswrapper[4975]: I0122 11:13:56.168113 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88"] Jan 22 11:13:56 crc kubenswrapper[4975]: I0122 11:13:56.218983 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" event={"ID":"f26c2f80-766b-4f5c-9c50-13fba04daff7","Type":"ContainerStarted","Data":"0eded37a10ff6060fd5ae2d463e321084baa4f2f31f18c2dd6338459fb9cd0e6"} Jan 22 11:13:57 crc kubenswrapper[4975]: I0122 11:13:57.229254 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" event={"ID":"f26c2f80-766b-4f5c-9c50-13fba04daff7","Type":"ContainerStarted","Data":"1fe11a7c178e8eb90aba98b299f82a5ea71868a277a745ed2e6dabf420a4f068"} Jan 22 11:13:57 crc kubenswrapper[4975]: I0122 11:13:57.266391 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" podStartSLOduration=1.7261149599999999 podStartE2EDuration="2.266369317s" podCreationTimestamp="2026-01-22 11:13:55 +0000 UTC" firstStartedPulling="2026-01-22 11:13:56.19097715 +0000 UTC m=+2228.849013260" lastFinishedPulling="2026-01-22 11:13:56.731231507 +0000 UTC m=+2229.389267617" observedRunningTime="2026-01-22 11:13:57.249437343 +0000 UTC m=+2229.907473463" watchObservedRunningTime="2026-01-22 11:13:57.266369317 +0000 UTC m=+2229.924405427" Jan 22 11:13:57 crc kubenswrapper[4975]: I0122 11:13:57.436438 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:13:57 crc kubenswrapper[4975]: I0122 11:13:57.436811 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:14:01 crc kubenswrapper[4975]: I0122 11:14:01.768944 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ds5nr"] Jan 22 11:14:01 crc kubenswrapper[4975]: I0122 11:14:01.772094 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:01 crc kubenswrapper[4975]: I0122 11:14:01.779153 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds5nr"] Jan 22 11:14:01 crc kubenswrapper[4975]: I0122 11:14:01.911286 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-catalog-content\") pod \"redhat-marketplace-ds5nr\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:01 crc kubenswrapper[4975]: I0122 11:14:01.911914 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-utilities\") pod \"redhat-marketplace-ds5nr\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:01 crc kubenswrapper[4975]: I0122 11:14:01.912062 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8xmh\" (UniqueName: \"kubernetes.io/projected/1693491a-14f5-44d2-9577-94a27836e4eb-kube-api-access-d8xmh\") pod \"redhat-marketplace-ds5nr\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:02 crc kubenswrapper[4975]: I0122 11:14:02.014213 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-catalog-content\") pod \"redhat-marketplace-ds5nr\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:02 crc kubenswrapper[4975]: I0122 11:14:02.014295 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-utilities\") pod \"redhat-marketplace-ds5nr\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:02 crc kubenswrapper[4975]: I0122 11:14:02.014418 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8xmh\" (UniqueName: \"kubernetes.io/projected/1693491a-14f5-44d2-9577-94a27836e4eb-kube-api-access-d8xmh\") pod \"redhat-marketplace-ds5nr\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:02 crc kubenswrapper[4975]: I0122 11:14:02.015174 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-catalog-content\") pod \"redhat-marketplace-ds5nr\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:02 crc kubenswrapper[4975]: I0122 11:14:02.015410 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-utilities\") pod \"redhat-marketplace-ds5nr\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:02 crc kubenswrapper[4975]: I0122 11:14:02.046622 4975 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-d8xmh\" (UniqueName: \"kubernetes.io/projected/1693491a-14f5-44d2-9577-94a27836e4eb-kube-api-access-d8xmh\") pod \"redhat-marketplace-ds5nr\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:02 crc kubenswrapper[4975]: I0122 11:14:02.103858 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:02 crc kubenswrapper[4975]: I0122 11:14:02.598509 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds5nr"] Jan 22 11:14:03 crc kubenswrapper[4975]: I0122 11:14:03.287516 4975 generic.go:334] "Generic (PLEG): container finished" podID="1693491a-14f5-44d2-9577-94a27836e4eb" containerID="6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0" exitCode=0 Jan 22 11:14:03 crc kubenswrapper[4975]: I0122 11:14:03.287572 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds5nr" event={"ID":"1693491a-14f5-44d2-9577-94a27836e4eb","Type":"ContainerDied","Data":"6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0"} Jan 22 11:14:03 crc kubenswrapper[4975]: I0122 11:14:03.287607 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds5nr" event={"ID":"1693491a-14f5-44d2-9577-94a27836e4eb","Type":"ContainerStarted","Data":"990163eb58daa0aef4147ef988cefaf0a6d249b4eb342a1bbd3b07c945a99b25"} Jan 22 11:14:04 crc kubenswrapper[4975]: I0122 11:14:04.300199 4975 generic.go:334] "Generic (PLEG): container finished" podID="1693491a-14f5-44d2-9577-94a27836e4eb" containerID="8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23" exitCode=0 Jan 22 11:14:04 crc kubenswrapper[4975]: I0122 11:14:04.300259 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds5nr" event={"ID":"1693491a-14f5-44d2-9577-94a27836e4eb","Type":"ContainerDied","Data":"8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23"} Jan 22 11:14:05 crc kubenswrapper[4975]: I0122 11:14:05.313900 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds5nr" event={"ID":"1693491a-14f5-44d2-9577-94a27836e4eb","Type":"ContainerStarted","Data":"77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255"} Jan 22 11:14:05 crc kubenswrapper[4975]: I0122 11:14:05.342066 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ds5nr" podStartSLOduration=2.929407285 podStartE2EDuration="4.342045845s" podCreationTimestamp="2026-01-22 11:14:01 +0000 UTC" firstStartedPulling="2026-01-22 11:14:03.289859905 +0000 UTC m=+2235.947896015" lastFinishedPulling="2026-01-22 11:14:04.702498425 +0000 UTC m=+2237.360534575" observedRunningTime="2026-01-22 11:14:05.336890726 +0000 UTC m=+2237.994926876" watchObservedRunningTime="2026-01-22 11:14:05.342045845 +0000 UTC m=+2238.000081975" Jan 22 11:14:12 crc kubenswrapper[4975]: I0122 11:14:12.104442 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:12 crc kubenswrapper[4975]: I0122 11:14:12.105117 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:12 crc kubenswrapper[4975]: I0122 11:14:12.185873 4975 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:12 crc kubenswrapper[4975]: I0122 11:14:12.477753 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:12 crc kubenswrapper[4975]: I0122 11:14:12.539836 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds5nr"] Jan 22 11:14:14 crc kubenswrapper[4975]: I0122 11:14:14.414508 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ds5nr" podUID="1693491a-14f5-44d2-9577-94a27836e4eb" containerName="registry-server" containerID="cri-o://77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255" gracePeriod=2 Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.413506 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.422749 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-utilities\") pod \"1693491a-14f5-44d2-9577-94a27836e4eb\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.422830 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8xmh\" (UniqueName: \"kubernetes.io/projected/1693491a-14f5-44d2-9577-94a27836e4eb-kube-api-access-d8xmh\") pod \"1693491a-14f5-44d2-9577-94a27836e4eb\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.423001 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-catalog-content\") pod \"1693491a-14f5-44d2-9577-94a27836e4eb\" (UID: \"1693491a-14f5-44d2-9577-94a27836e4eb\") " Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.424459 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-utilities" (OuterVolumeSpecName: "utilities") pod "1693491a-14f5-44d2-9577-94a27836e4eb" (UID: "1693491a-14f5-44d2-9577-94a27836e4eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.426620 4975 generic.go:334] "Generic (PLEG): container finished" podID="1693491a-14f5-44d2-9577-94a27836e4eb" containerID="77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255" exitCode=0 Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.426667 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds5nr" event={"ID":"1693491a-14f5-44d2-9577-94a27836e4eb","Type":"ContainerDied","Data":"77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255"} Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.426719 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ds5nr" event={"ID":"1693491a-14f5-44d2-9577-94a27836e4eb","Type":"ContainerDied","Data":"990163eb58daa0aef4147ef988cefaf0a6d249b4eb342a1bbd3b07c945a99b25"} Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.426727 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ds5nr" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.426750 4975 scope.go:117] "RemoveContainer" containerID="77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.448095 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1693491a-14f5-44d2-9577-94a27836e4eb" (UID: "1693491a-14f5-44d2-9577-94a27836e4eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.474993 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1693491a-14f5-44d2-9577-94a27836e4eb-kube-api-access-d8xmh" (OuterVolumeSpecName: "kube-api-access-d8xmh") pod "1693491a-14f5-44d2-9577-94a27836e4eb" (UID: "1693491a-14f5-44d2-9577-94a27836e4eb"). InnerVolumeSpecName "kube-api-access-d8xmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.494154 4975 scope.go:117] "RemoveContainer" containerID="8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.522460 4975 scope.go:117] "RemoveContainer" containerID="6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.524715 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.524744 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1693491a-14f5-44d2-9577-94a27836e4eb-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.524756 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8xmh\" (UniqueName: \"kubernetes.io/projected/1693491a-14f5-44d2-9577-94a27836e4eb-kube-api-access-d8xmh\") on node \"crc\" DevicePath \"\"" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.581030 4975 scope.go:117] "RemoveContainer" containerID="77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255" Jan 22 11:14:15 crc kubenswrapper[4975]: E0122 11:14:15.581613 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255\": container with ID starting with 77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255 not found: ID does not exist" containerID="77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.581670 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255"} err="failed to get container status \"77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255\": rpc error: code = NotFound desc = could not find container \"77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255\": container with ID starting with 77f3f6ba657186823c9746620b69e9c99e59cd1d6abfb368d7ff62b08e78d255 not found: ID 
does not exist" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.581703 4975 scope.go:117] "RemoveContainer" containerID="8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23" Jan 22 11:14:15 crc kubenswrapper[4975]: E0122 11:14:15.582245 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23\": container with ID starting with 8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23 not found: ID does not exist" containerID="8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.582278 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23"} err="failed to get container status \"8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23\": rpc error: code = NotFound desc = could not find container \"8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23\": container with ID starting with 8947e570e3b9dfa24552d55e0231e9e2b3c0f7801453798c584a6a0a070b7a23 not found: ID does not exist" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.582300 4975 scope.go:117] "RemoveContainer" containerID="6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0" Jan 22 11:14:15 crc kubenswrapper[4975]: E0122 11:14:15.582600 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0\": container with ID starting with 6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0 not found: ID does not exist" containerID="6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.582614 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0"} err="failed to get container status \"6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0\": rpc error: code = NotFound desc = could not find container \"6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0\": container with ID starting with 6ea749ac9cb829f0823a5be203657b7e240d041a4709503e20b141777ddce1e0 not found: ID does not exist" Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.769685 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds5nr"] Jan 22 11:14:15 crc kubenswrapper[4975]: I0122 11:14:15.779861 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ds5nr"] Jan 22 11:14:17 crc kubenswrapper[4975]: I0122 11:14:17.775755 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1693491a-14f5-44d2-9577-94a27836e4eb" path="/var/lib/kubelet/pods/1693491a-14f5-44d2-9577-94a27836e4eb/volumes" Jan 22 11:14:27 crc kubenswrapper[4975]: I0122 11:14:27.436043 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:14:27 crc kubenswrapper[4975]: I0122 11:14:27.436602 4975 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:14:57 crc kubenswrapper[4975]: I0122 11:14:57.436991 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:14:57 crc kubenswrapper[4975]: I0122 11:14:57.437568 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:14:57 crc kubenswrapper[4975]: I0122 11:14:57.437609 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 11:14:57 crc kubenswrapper[4975]: I0122 11:14:57.438369 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:14:57 crc kubenswrapper[4975]: I0122 11:14:57.438428 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" gracePeriod=600 Jan 22 11:14:57 crc kubenswrapper[4975]: I0122 11:14:57.875285 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" exitCode=0 Jan 22 11:14:57 crc kubenswrapper[4975]: I0122 11:14:57.875408 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7"} Jan 22 11:14:57 crc kubenswrapper[4975]: I0122 11:14:57.875492 4975 scope.go:117] "RemoveContainer" containerID="f456fed653d12e00848456fc08269bcad2671aac88dedee7d07f3b2fd5e96016" Jan 22 11:14:58 crc kubenswrapper[4975]: E0122 11:14:58.070483 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:14:58 crc kubenswrapper[4975]: I0122 11:14:58.885261 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:14:58 crc 
kubenswrapper[4975]: E0122 11:14:58.885559 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.155734 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt"] Jan 22 11:15:00 crc kubenswrapper[4975]: E0122 11:15:00.156364 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1693491a-14f5-44d2-9577-94a27836e4eb" containerName="extract-content" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.156377 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="1693491a-14f5-44d2-9577-94a27836e4eb" containerName="extract-content" Jan 22 11:15:00 crc kubenswrapper[4975]: E0122 11:15:00.156389 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1693491a-14f5-44d2-9577-94a27836e4eb" containerName="registry-server" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.156396 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="1693491a-14f5-44d2-9577-94a27836e4eb" containerName="registry-server" Jan 22 11:15:00 crc kubenswrapper[4975]: E0122 11:15:00.156409 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1693491a-14f5-44d2-9577-94a27836e4eb" containerName="extract-utilities" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.156415 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="1693491a-14f5-44d2-9577-94a27836e4eb" containerName="extract-utilities" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.156605 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="1693491a-14f5-44d2-9577-94a27836e4eb" containerName="registry-server" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.157184 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.159533 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.159970 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.175744 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt"] Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.238827 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7f148e-cecc-4c7f-878a-b543f3cdff20-secret-volume\") pod \"collect-profiles-29484675-4qqwt\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.239026 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7f148e-cecc-4c7f-878a-b543f3cdff20-config-volume\") pod \"collect-profiles-29484675-4qqwt\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.239495 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmhn\" (UniqueName: \"kubernetes.io/projected/9d7f148e-cecc-4c7f-878a-b543f3cdff20-kube-api-access-4nmhn\") pod \"collect-profiles-29484675-4qqwt\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.341862 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nmhn\" (UniqueName: \"kubernetes.io/projected/9d7f148e-cecc-4c7f-878a-b543f3cdff20-kube-api-access-4nmhn\") pod \"collect-profiles-29484675-4qqwt\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.342451 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7f148e-cecc-4c7f-878a-b543f3cdff20-secret-volume\") pod \"collect-profiles-29484675-4qqwt\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.342704 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7f148e-cecc-4c7f-878a-b543f3cdff20-config-volume\") pod \"collect-profiles-29484675-4qqwt\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.344215 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7f148e-cecc-4c7f-878a-b543f3cdff20-config-volume\") pod 
\"collect-profiles-29484675-4qqwt\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.348376 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7f148e-cecc-4c7f-878a-b543f3cdff20-secret-volume\") pod \"collect-profiles-29484675-4qqwt\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.361317 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nmhn\" (UniqueName: \"kubernetes.io/projected/9d7f148e-cecc-4c7f-878a-b543f3cdff20-kube-api-access-4nmhn\") pod \"collect-profiles-29484675-4qqwt\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.490046 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:00 crc kubenswrapper[4975]: I0122 11:15:00.937202 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt"] Jan 22 11:15:01 crc kubenswrapper[4975]: I0122 11:15:01.920318 4975 generic.go:334] "Generic (PLEG): container finished" podID="9d7f148e-cecc-4c7f-878a-b543f3cdff20" containerID="520b81eb748eb6b54d023dbe1ddc1a62cab07a8efbad2f5ee1e9b22bcc0da4f5" exitCode=0 Jan 22 11:15:01 crc kubenswrapper[4975]: I0122 11:15:01.920489 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" event={"ID":"9d7f148e-cecc-4c7f-878a-b543f3cdff20","Type":"ContainerDied","Data":"520b81eb748eb6b54d023dbe1ddc1a62cab07a8efbad2f5ee1e9b22bcc0da4f5"} Jan 22 11:15:01 crc kubenswrapper[4975]: I0122 11:15:01.920846 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" event={"ID":"9d7f148e-cecc-4c7f-878a-b543f3cdff20","Type":"ContainerStarted","Data":"da827a7f4eb882ddd444e71415a096c802c7e292004ce7ecfb1c915fcef25387"} Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.332695 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.415407 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7f148e-cecc-4c7f-878a-b543f3cdff20-config-volume\") pod \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.415533 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7f148e-cecc-4c7f-878a-b543f3cdff20-secret-volume\") pod \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.415731 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nmhn\" (UniqueName: \"kubernetes.io/projected/9d7f148e-cecc-4c7f-878a-b543f3cdff20-kube-api-access-4nmhn\") pod \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\" (UID: \"9d7f148e-cecc-4c7f-878a-b543f3cdff20\") " Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.416458 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d7f148e-cecc-4c7f-878a-b543f3cdff20-config-volume" (OuterVolumeSpecName: "config-volume") pod "9d7f148e-cecc-4c7f-878a-b543f3cdff20" (UID: "9d7f148e-cecc-4c7f-878a-b543f3cdff20"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.420543 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d7f148e-cecc-4c7f-878a-b543f3cdff20-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9d7f148e-cecc-4c7f-878a-b543f3cdff20" (UID: "9d7f148e-cecc-4c7f-878a-b543f3cdff20"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.422592 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d7f148e-cecc-4c7f-878a-b543f3cdff20-kube-api-access-4nmhn" (OuterVolumeSpecName: "kube-api-access-4nmhn") pod "9d7f148e-cecc-4c7f-878a-b543f3cdff20" (UID: "9d7f148e-cecc-4c7f-878a-b543f3cdff20"). InnerVolumeSpecName "kube-api-access-4nmhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.518754 4975 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7f148e-cecc-4c7f-878a-b543f3cdff20-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.518811 4975 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7f148e-cecc-4c7f-878a-b543f3cdff20-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.518832 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nmhn\" (UniqueName: \"kubernetes.io/projected/9d7f148e-cecc-4c7f-878a-b543f3cdff20-kube-api-access-4nmhn\") on node \"crc\" DevicePath \"\"" Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.941423 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" event={"ID":"9d7f148e-cecc-4c7f-878a-b543f3cdff20","Type":"ContainerDied","Data":"da827a7f4eb882ddd444e71415a096c802c7e292004ce7ecfb1c915fcef25387"} Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.941463 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da827a7f4eb882ddd444e71415a096c802c7e292004ce7ecfb1c915fcef25387" Jan 22 11:15:03 crc kubenswrapper[4975]: I0122 11:15:03.941511 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-4qqwt" Jan 22 11:15:04 crc kubenswrapper[4975]: I0122 11:15:04.413732 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z"] Jan 22 11:15:04 crc kubenswrapper[4975]: I0122 11:15:04.420999 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-xqs7z"] Jan 22 11:15:05 crc kubenswrapper[4975]: I0122 11:15:05.769933 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81fdabfd-7410-4ca7-855b-c1764b58b808" path="/var/lib/kubelet/pods/81fdabfd-7410-4ca7-855b-c1764b58b808/volumes" Jan 22 11:15:09 crc kubenswrapper[4975]: I0122 11:15:09.755402 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:15:09 crc kubenswrapper[4975]: E0122 11:15:09.756253 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:15:21 crc kubenswrapper[4975]: I0122 11:15:21.754618 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:15:21 crc kubenswrapper[4975]: E0122 11:15:21.755424 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:15:33 crc kubenswrapper[4975]: I0122 11:15:33.431825 4975 scope.go:117] "RemoveContainer" containerID="3e6dbcc74f3cd802ae759a53a0805606d483b01b1e8bdea3ec3f7529c9411dbe" Jan 22 11:15:36 crc kubenswrapper[4975]: I0122 11:15:36.756298 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:15:36 crc kubenswrapper[4975]: E0122 11:15:36.757284 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:15:50 crc kubenswrapper[4975]: I0122 11:15:50.755006 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:15:50 crc kubenswrapper[4975]: E0122 11:15:50.755662 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:16:01 crc kubenswrapper[4975]: I0122 11:16:01.755417 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:16:01 crc kubenswrapper[4975]: E0122 11:16:01.756106 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:16:16 crc kubenswrapper[4975]: I0122 11:16:16.754756 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:16:16 crc kubenswrapper[4975]: E0122 11:16:16.755587 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:16:29 crc kubenswrapper[4975]: I0122 11:16:29.755053 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:16:29 crc kubenswrapper[4975]: E0122 11:16:29.755907 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:16:41 crc kubenswrapper[4975]: I0122 11:16:41.755056 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:16:41 crc kubenswrapper[4975]: E0122 11:16:41.755701 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:16:52 crc kubenswrapper[4975]: I0122 11:16:52.757214 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:16:52 crc kubenswrapper[4975]: E0122 11:16:52.758248 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:17:04 crc kubenswrapper[4975]: I0122 11:17:04.755282 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:17:04 crc kubenswrapper[4975]: E0122 11:17:04.755898 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:17:15 crc kubenswrapper[4975]: I0122 11:17:15.755430 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:17:15 crc kubenswrapper[4975]: E0122 11:17:15.756292 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:17:28 crc kubenswrapper[4975]: I0122 11:17:28.756454 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:17:28 crc kubenswrapper[4975]: E0122 11:17:28.757310 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:17:43 crc kubenswrapper[4975]: I0122 11:17:43.756867 4975 
scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:17:43 crc kubenswrapper[4975]: E0122 11:17:43.757949 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:17:54 crc kubenswrapper[4975]: I0122 11:17:54.756534 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:17:54 crc kubenswrapper[4975]: E0122 11:17:54.757872 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:18:09 crc kubenswrapper[4975]: I0122 11:18:09.755798 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:18:09 crc kubenswrapper[4975]: E0122 11:18:09.757081 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:18:22 crc kubenswrapper[4975]: I0122 11:18:22.755794 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:18:22 crc kubenswrapper[4975]: E0122 11:18:22.757593 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.073420 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tm7wb"] Jan 22 11:18:30 crc kubenswrapper[4975]: E0122 11:18:30.074971 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d7f148e-cecc-4c7f-878a-b543f3cdff20" containerName="collect-profiles" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.075006 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7f148e-cecc-4c7f-878a-b543f3cdff20" containerName="collect-profiles" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.075571 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d7f148e-cecc-4c7f-878a-b543f3cdff20" containerName="collect-profiles" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.079193 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.101306 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm7wb"] Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.193760 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-utilities\") pod \"redhat-operators-tm7wb\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") " pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.193868 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-catalog-content\") pod \"redhat-operators-tm7wb\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") " pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.194174 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcv6\" (UniqueName: \"kubernetes.io/projected/fef95bfe-411c-4957-91df-8a85f2258c38-kube-api-access-xgcv6\") pod \"redhat-operators-tm7wb\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") " pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.250381 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kntfk"] Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.252691 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.257936 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kntfk"] Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.296655 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-utilities\") pod \"redhat-operators-tm7wb\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") " pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.296722 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-catalog-content\") pod \"redhat-operators-tm7wb\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") " pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.296796 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgcv6\" (UniqueName: \"kubernetes.io/projected/fef95bfe-411c-4957-91df-8a85f2258c38-kube-api-access-xgcv6\") pod \"redhat-operators-tm7wb\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") " pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.297179 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-utilities\") pod \"redhat-operators-tm7wb\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") " 
pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.297262 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-catalog-content\") pod \"redhat-operators-tm7wb\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") " pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.317171 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgcv6\" (UniqueName: \"kubernetes.io/projected/fef95bfe-411c-4957-91df-8a85f2258c38-kube-api-access-xgcv6\") pod \"redhat-operators-tm7wb\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") " pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.398964 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-catalog-content\") pod \"certified-operators-kntfk\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.399060 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87wlx\" (UniqueName: \"kubernetes.io/projected/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-kube-api-access-87wlx\") pod \"certified-operators-kntfk\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.399101 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-utilities\") pod \"certified-operators-kntfk\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.421963 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.501023 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87wlx\" (UniqueName: \"kubernetes.io/projected/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-kube-api-access-87wlx\") pod \"certified-operators-kntfk\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.501087 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-utilities\") pod \"certified-operators-kntfk\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.501203 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-catalog-content\") pod \"certified-operators-kntfk\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.501689 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-utilities\") pod \"certified-operators-kntfk\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.501701 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-catalog-content\") pod \"certified-operators-kntfk\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.522277 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87wlx\" (UniqueName: \"kubernetes.io/projected/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-kube-api-access-87wlx\") pod \"certified-operators-kntfk\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.574307 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:30 crc kubenswrapper[4975]: I0122 11:18:30.920319 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm7wb"] Jan 22 11:18:31 crc kubenswrapper[4975]: I0122 11:18:31.143807 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kntfk"] Jan 22 11:18:31 crc kubenswrapper[4975]: W0122 11:18:31.144281 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod901cd0c4_31e8_45a7_97d0_35aa1f1305a9.slice/crio-7504bbff23e1e3eabbce3e51c03721156c16ff5d8f3de8593423e716925e9c35 WatchSource:0}: Error finding container 7504bbff23e1e3eabbce3e51c03721156c16ff5d8f3de8593423e716925e9c35: Status 404 returned error can't find the container with id 7504bbff23e1e3eabbce3e51c03721156c16ff5d8f3de8593423e716925e9c35 Jan 22 11:18:31 crc kubenswrapper[4975]: I0122 11:18:31.823497 4975 generic.go:334] "Generic (PLEG): container finished" podID="fef95bfe-411c-4957-91df-8a85f2258c38" containerID="da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40" exitCode=0 Jan 22 11:18:31 crc kubenswrapper[4975]: I0122 11:18:31.823567 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7wb" event={"ID":"fef95bfe-411c-4957-91df-8a85f2258c38","Type":"ContainerDied","Data":"da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40"} Jan 22 11:18:31 crc kubenswrapper[4975]: I0122 11:18:31.823593 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7wb" event={"ID":"fef95bfe-411c-4957-91df-8a85f2258c38","Type":"ContainerStarted","Data":"76a16b9a378e3b00c71844dd345519d3c91e1b4948e258cb62d18d72f0ea00b4"} Jan 22 11:18:31 crc kubenswrapper[4975]: I0122 11:18:31.825377 4975 generic.go:334] "Generic (PLEG): container finished" podID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerID="a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2" exitCode=0 Jan 22 11:18:31 crc kubenswrapper[4975]: I0122 11:18:31.825412 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kntfk" event={"ID":"901cd0c4-31e8-45a7-97d0-35aa1f1305a9","Type":"ContainerDied","Data":"a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2"} Jan 22 11:18:31 crc kubenswrapper[4975]: I0122 11:18:31.825432 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kntfk" event={"ID":"901cd0c4-31e8-45a7-97d0-35aa1f1305a9","Type":"ContainerStarted","Data":"7504bbff23e1e3eabbce3e51c03721156c16ff5d8f3de8593423e716925e9c35"} Jan 22 11:18:31 crc kubenswrapper[4975]: I0122 11:18:31.826661 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:18:32 crc kubenswrapper[4975]: I0122 11:18:32.835578 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7wb" event={"ID":"fef95bfe-411c-4957-91df-8a85f2258c38","Type":"ContainerStarted","Data":"0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12"} Jan 22 11:18:32 crc kubenswrapper[4975]: I0122 11:18:32.837176 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kntfk" 
event={"ID":"901cd0c4-31e8-45a7-97d0-35aa1f1305a9","Type":"ContainerStarted","Data":"171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044"} Jan 22 11:18:33 crc kubenswrapper[4975]: I0122 11:18:33.852302 4975 generic.go:334] "Generic (PLEG): container finished" podID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerID="171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044" exitCode=0 Jan 22 11:18:33 crc kubenswrapper[4975]: I0122 11:18:33.852378 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kntfk" event={"ID":"901cd0c4-31e8-45a7-97d0-35aa1f1305a9","Type":"ContainerDied","Data":"171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044"} Jan 22 11:18:34 crc kubenswrapper[4975]: I0122 11:18:34.864117 4975 generic.go:334] "Generic (PLEG): container finished" podID="fef95bfe-411c-4957-91df-8a85f2258c38" containerID="0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12" exitCode=0 Jan 22 11:18:34 crc kubenswrapper[4975]: I0122 11:18:34.864209 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7wb" event={"ID":"fef95bfe-411c-4957-91df-8a85f2258c38","Type":"ContainerDied","Data":"0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12"} Jan 22 11:18:34 crc kubenswrapper[4975]: I0122 11:18:34.871915 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kntfk" event={"ID":"901cd0c4-31e8-45a7-97d0-35aa1f1305a9","Type":"ContainerStarted","Data":"eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667"} Jan 22 11:18:34 crc kubenswrapper[4975]: I0122 11:18:34.906691 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kntfk" podStartSLOduration=2.438678426 podStartE2EDuration="4.906676463s" podCreationTimestamp="2026-01-22 11:18:30 +0000 UTC" firstStartedPulling="2026-01-22 11:18:31.82741811 +0000 UTC m=+2504.485454220" lastFinishedPulling="2026-01-22 11:18:34.295416117 +0000 UTC m=+2506.953452257" observedRunningTime="2026-01-22 11:18:34.90473763 +0000 UTC m=+2507.562773750" watchObservedRunningTime="2026-01-22 11:18:34.906676463 +0000 UTC m=+2507.564712573" Jan 22 11:18:36 crc kubenswrapper[4975]: I0122 11:18:36.891571 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7wb" event={"ID":"fef95bfe-411c-4957-91df-8a85f2258c38","Type":"ContainerStarted","Data":"861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c"} Jan 22 11:18:36 crc kubenswrapper[4975]: I0122 11:18:36.909937 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tm7wb" podStartSLOduration=3.083477312 podStartE2EDuration="6.909922334s" podCreationTimestamp="2026-01-22 11:18:30 +0000 UTC" firstStartedPulling="2026-01-22 11:18:31.826148256 +0000 UTC m=+2504.484184376" lastFinishedPulling="2026-01-22 11:18:35.652593278 +0000 UTC m=+2508.310629398" observedRunningTime="2026-01-22 11:18:36.908776733 +0000 UTC m=+2509.566812883" watchObservedRunningTime="2026-01-22 11:18:36.909922334 +0000 UTC m=+2509.567958434" Jan 22 11:18:37 crc kubenswrapper[4975]: I0122 11:18:37.764592 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:18:37 crc kubenswrapper[4975]: E0122 11:18:37.764994 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:18:40 crc kubenswrapper[4975]: I0122 11:18:40.423059 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:40 crc kubenswrapper[4975]: I0122 11:18:40.423114 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tm7wb" Jan 22 11:18:40 crc kubenswrapper[4975]: I0122 11:18:40.575285 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:40 crc kubenswrapper[4975]: I0122 11:18:40.575624 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:40 crc kubenswrapper[4975]: I0122 11:18:40.626238 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:41 crc kubenswrapper[4975]: I0122 11:18:41.024831 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:41 crc kubenswrapper[4975]: I0122 11:18:41.448166 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kntfk"] Jan 22 11:18:41 crc kubenswrapper[4975]: I0122 11:18:41.494493 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tm7wb" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" containerName="registry-server" probeResult="failure" output=< Jan 22 11:18:41 crc kubenswrapper[4975]: timeout: failed to connect service ":50051" within 1s Jan 22 11:18:41 crc kubenswrapper[4975]: > Jan 22 11:18:41 crc kubenswrapper[4975]: I0122 11:18:41.955273 4975 generic.go:334] "Generic (PLEG): container finished" podID="f26c2f80-766b-4f5c-9c50-13fba04daff7" containerID="1fe11a7c178e8eb90aba98b299f82a5ea71868a277a745ed2e6dabf420a4f068" exitCode=0 Jan 22 11:18:41 crc kubenswrapper[4975]: I0122 11:18:41.955394 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" event={"ID":"f26c2f80-766b-4f5c-9c50-13fba04daff7","Type":"ContainerDied","Data":"1fe11a7c178e8eb90aba98b299f82a5ea71868a277a745ed2e6dabf420a4f068"} Jan 22 11:18:42 crc kubenswrapper[4975]: I0122 11:18:42.961599 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kntfk" podUID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerName="registry-server" containerID="cri-o://eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667" gracePeriod=2 Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.437969 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.444187 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kntfk" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.597735 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87wlx\" (UniqueName: \"kubernetes.io/projected/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-kube-api-access-87wlx\") pod \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.598151 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-utilities\") pod \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.598322 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-ssh-key-openstack-edpm-ipam\") pod \"f26c2f80-766b-4f5c-9c50-13fba04daff7\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.598499 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-catalog-content\") pod \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\" (UID: \"901cd0c4-31e8-45a7-97d0-35aa1f1305a9\") " Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.598745 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-secret-0\") pod \"f26c2f80-766b-4f5c-9c50-13fba04daff7\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.599325 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq5ff\" (UniqueName: \"kubernetes.io/projected/f26c2f80-766b-4f5c-9c50-13fba04daff7-kube-api-access-xq5ff\") pod \"f26c2f80-766b-4f5c-9c50-13fba04daff7\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.599067 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-utilities" (OuterVolumeSpecName: "utilities") pod "901cd0c4-31e8-45a7-97d0-35aa1f1305a9" (UID: "901cd0c4-31e8-45a7-97d0-35aa1f1305a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.599607 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-combined-ca-bundle\") pod \"f26c2f80-766b-4f5c-9c50-13fba04daff7\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.599768 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-inventory\") pod \"f26c2f80-766b-4f5c-9c50-13fba04daff7\" (UID: \"f26c2f80-766b-4f5c-9c50-13fba04daff7\") " Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.600575 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.604096 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f26c2f80-766b-4f5c-9c50-13fba04daff7" (UID: "f26c2f80-766b-4f5c-9c50-13fba04daff7"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.604911 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26c2f80-766b-4f5c-9c50-13fba04daff7-kube-api-access-xq5ff" (OuterVolumeSpecName: "kube-api-access-xq5ff") pod "f26c2f80-766b-4f5c-9c50-13fba04daff7" (UID: "f26c2f80-766b-4f5c-9c50-13fba04daff7"). InnerVolumeSpecName "kube-api-access-xq5ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.605750 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-kube-api-access-87wlx" (OuterVolumeSpecName: "kube-api-access-87wlx") pod "901cd0c4-31e8-45a7-97d0-35aa1f1305a9" (UID: "901cd0c4-31e8-45a7-97d0-35aa1f1305a9"). InnerVolumeSpecName "kube-api-access-87wlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.625422 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "f26c2f80-766b-4f5c-9c50-13fba04daff7" (UID: "f26c2f80-766b-4f5c-9c50-13fba04daff7"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.631613 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-inventory" (OuterVolumeSpecName: "inventory") pod "f26c2f80-766b-4f5c-9c50-13fba04daff7" (UID: "f26c2f80-766b-4f5c-9c50-13fba04daff7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.652913 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f26c2f80-766b-4f5c-9c50-13fba04daff7" (UID: "f26c2f80-766b-4f5c-9c50-13fba04daff7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.658091 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "901cd0c4-31e8-45a7-97d0-35aa1f1305a9" (UID: "901cd0c4-31e8-45a7-97d0-35aa1f1305a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.702441 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq5ff\" (UniqueName: \"kubernetes.io/projected/f26c2f80-766b-4f5c-9c50-13fba04daff7-kube-api-access-xq5ff\") on node \"crc\" DevicePath \"\"" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.702526 4975 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.702544 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.702557 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87wlx\" (UniqueName: \"kubernetes.io/projected/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-kube-api-access-87wlx\") on node \"crc\" DevicePath \"\"" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.702570 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.702632 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901cd0c4-31e8-45a7-97d0-35aa1f1305a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.702646 4975 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f26c2f80-766b-4f5c-9c50-13fba04daff7-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.983742 4975 generic.go:334] "Generic (PLEG): container finished" podID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerID="eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667" exitCode=0 Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.983867 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kntfk" event={"ID":"901cd0c4-31e8-45a7-97d0-35aa1f1305a9","Type":"ContainerDied","Data":"eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667"} Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.983908 
Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.983908 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kntfk" event={"ID":"901cd0c4-31e8-45a7-97d0-35aa1f1305a9","Type":"ContainerDied","Data":"7504bbff23e1e3eabbce3e51c03721156c16ff5d8f3de8593423e716925e9c35"}
Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.983935 4975 scope.go:117] "RemoveContainer" containerID="eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667"
Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.986228 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kntfk"
Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.987511 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88" event={"ID":"f26c2f80-766b-4f5c-9c50-13fba04daff7","Type":"ContainerDied","Data":"0eded37a10ff6060fd5ae2d463e321084baa4f2f31f18c2dd6338459fb9cd0e6"}
Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.987570 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eded37a10ff6060fd5ae2d463e321084baa4f2f31f18c2dd6338459fb9cd0e6"
Jan 22 11:18:43 crc kubenswrapper[4975]: I0122 11:18:43.987667 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6n88"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.036379 4975 scope.go:117] "RemoveContainer" containerID="171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.038877 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kntfk"]
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.056326 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kntfk"]
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.087753 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"]
Jan 22 11:18:44 crc kubenswrapper[4975]: E0122 11:18:44.088290 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerName="extract-content"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.088316 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerName="extract-content"
Jan 22 11:18:44 crc kubenswrapper[4975]: E0122 11:18:44.088381 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerName="extract-utilities"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.088393 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerName="extract-utilities"
Jan 22 11:18:44 crc kubenswrapper[4975]: E0122 11:18:44.088411 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerName="registry-server"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.088419 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerName="registry-server"
Jan 22 11:18:44 crc kubenswrapper[4975]: E0122 11:18:44.088445 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26c2f80-766b-4f5c-9c50-13fba04daff7" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.088456 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26c2f80-766b-4f5c-9c50-13fba04daff7" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.088665 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" containerName="registry-server"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.088710 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26c2f80-766b-4f5c-9c50-13fba04daff7" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.089282 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.090968 4975 scope.go:117] "RemoveContainer" containerID="a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.092030 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.095800 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.096144 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.096267 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.096313 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.096457 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.096597 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.104859 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"]
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.151478 4975 scope.go:117] "RemoveContainer" containerID="eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667"
Jan 22 11:18:44 crc kubenswrapper[4975]: E0122 11:18:44.152047 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667\": container with ID starting with eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667 not found: ID does not exist" containerID="eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.152084 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667"} err="failed to get container status \"eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667\": rpc error: code = NotFound desc = could not find container \"eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667\": container with ID starting with eca664ab3e17532feec886b13b3bf217f1612904ef47ad597210bc8e9ffb7667 not found: ID does not exist"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.152110 4975 scope.go:117] "RemoveContainer" containerID="171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044"
Jan 22 11:18:44 crc kubenswrapper[4975]: E0122 11:18:44.152492 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044\": container with ID starting with 171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044 not found: ID does not exist" containerID="171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.152518 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044"} err="failed to get container status \"171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044\": rpc error: code = NotFound desc = could not find container \"171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044\": container with ID starting with 171b905a72bffa770c4e608815394eefc2a408bf1eb7ddfed9f8dabd246ad044 not found: ID does not exist"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.152534 4975 scope.go:117] "RemoveContainer" containerID="a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2"
Jan 22 11:18:44 crc kubenswrapper[4975]: E0122 11:18:44.152796 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2\": container with ID starting with a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2 not found: ID does not exist" containerID="a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.152824 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2"} err="failed to get container status \"a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2\": rpc error: code = NotFound desc = could not find container \"a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2\": container with ID starting with a132807c9211d868a3875de451536dfa962fc39859162c3c72fd80805572f7c2 not found: ID does not exist"
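[annotation] The RemoveContainer / NotFound exchange above is benign: the certified-operators-kntfk containers were already removed along with the pod sandbox, so the follow-up DeleteContainer calls find nothing, log the rpc NotFound error, and move on. Removal is effectively idempotent; a toy version below (invented names, not CRI client code).

```go
// Toy illustration of why the NotFound errors above are harmless:
// deleting an already-deleted container just reports "not found".
package main

import "fmt"

func removeContainer(containers map[string]bool, id string) error {
	if !containers[id] {
		return fmt.Errorf("could not find container %q: ID does not exist", id)
	}
	delete(containers, id)
	return nil
}

func main() {
	containers := map[string]bool{"eca664ab": true}
	fmt.Println(removeContainer(containers, "eca664ab")) // <nil>: first removal wins
	fmt.Println(removeContainer(containers, "eca664ab")) // not found: safe to ignore
}
```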
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.222350 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm6tl\" (UniqueName: \"kubernetes.io/projected/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-kube-api-access-jm6tl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.222804 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.222896 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.223267 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.223319 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.223440 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.223576 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.223769 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.223911 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.325616 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.325667 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.325693 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.325727 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm6tl\" (UniqueName: \"kubernetes.io/projected/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-kube-api-access-jm6tl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.325776 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.325798 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.325846 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.325869 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.325893 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.326960 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.331073 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.331528 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.332028 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.332208 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.332644 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.336863 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.338885 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.343855 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm6tl\" (UniqueName: \"kubernetes.io/projected/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-kube-api-access-jm6tl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-c25vl\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:44 crc kubenswrapper[4975]: I0122 11:18:44.496735 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"
Jan 22 11:18:45 crc kubenswrapper[4975]: W0122 11:18:45.080737 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8154d48_3ee4_45f3_bf77_45befe6cdc3a.slice/crio-98de8dea7082e8f95c6ff217a015353dec858c832481b43995aa2758528d5a79 WatchSource:0}: Error finding container 98de8dea7082e8f95c6ff217a015353dec858c832481b43995aa2758528d5a79: Status 404 returned error can't find the container with id 98de8dea7082e8f95c6ff217a015353dec858c832481b43995aa2758528d5a79
Jan 22 11:18:45 crc kubenswrapper[4975]: I0122 11:18:45.084925 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl"]
Jan 22 11:18:45 crc kubenswrapper[4975]: I0122 11:18:45.767136 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901cd0c4-31e8-45a7-97d0-35aa1f1305a9" path="/var/lib/kubelet/pods/901cd0c4-31e8-45a7-97d0-35aa1f1305a9/volumes"
Jan 22 11:18:46 crc kubenswrapper[4975]: I0122 11:18:46.016116 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl" event={"ID":"f8154d48-3ee4-45f3-bf77-45befe6cdc3a","Type":"ContainerStarted","Data":"35c3e2f537ff4962af8428765c5b61fe8e1b31f1b80ccdae44a9fb2c2b18e395"}
Jan 22 11:18:46 crc kubenswrapper[4975]: I0122 11:18:46.016503 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl" event={"ID":"f8154d48-3ee4-45f3-bf77-45befe6cdc3a","Type":"ContainerStarted","Data":"98de8dea7082e8f95c6ff217a015353dec858c832481b43995aa2758528d5a79"}
Jan 22 11:18:46 crc kubenswrapper[4975]: I0122 11:18:46.045163 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl" podStartSLOduration=1.643217449 podStartE2EDuration="2.045138392s" podCreationTimestamp="2026-01-22 11:18:44 +0000 UTC" firstStartedPulling="2026-01-22 11:18:45.092996997 +0000 UTC m=+2517.751033147" lastFinishedPulling="2026-01-22 11:18:45.49491798 +0000 UTC m=+2518.152954090" observedRunningTime="2026-01-22 11:18:46.039435077 +0000 UTC m=+2518.697471297" watchObservedRunningTime="2026-01-22 11:18:46.045138392 +0000 UTC m=+2518.703174542"
Jan 22 11:18:49 crc kubenswrapper[4975]: I0122 11:18:49.755603 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7"
Jan 22 11:18:49 crc kubenswrapper[4975]: E0122 11:18:49.756646 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:18:50 crc kubenswrapper[4975]: I0122 11:18:50.510105 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tm7wb"
Jan 22 11:18:50 crc kubenswrapper[4975]: I0122 11:18:50.583424 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tm7wb"
Jan 22 11:18:50 crc kubenswrapper[4975]: I0122 11:18:50.774232 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tm7wb"]
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.089740 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tm7wb" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" containerName="registry-server" containerID="cri-o://861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c" gracePeriod=2
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.576755 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7wb"
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.606490 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-catalog-content\") pod \"fef95bfe-411c-4957-91df-8a85f2258c38\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") "
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.606615 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-utilities\") pod \"fef95bfe-411c-4957-91df-8a85f2258c38\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") "
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.606697 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgcv6\" (UniqueName: \"kubernetes.io/projected/fef95bfe-411c-4957-91df-8a85f2258c38-kube-api-access-xgcv6\") pod \"fef95bfe-411c-4957-91df-8a85f2258c38\" (UID: \"fef95bfe-411c-4957-91df-8a85f2258c38\") "
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.607539 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-utilities" (OuterVolumeSpecName: "utilities") pod "fef95bfe-411c-4957-91df-8a85f2258c38" (UID: "fef95bfe-411c-4957-91df-8a85f2258c38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.615267 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef95bfe-411c-4957-91df-8a85f2258c38-kube-api-access-xgcv6" (OuterVolumeSpecName: "kube-api-access-xgcv6") pod "fef95bfe-411c-4957-91df-8a85f2258c38" (UID: "fef95bfe-411c-4957-91df-8a85f2258c38"). InnerVolumeSpecName "kube-api-access-xgcv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.708790 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgcv6\" (UniqueName: \"kubernetes.io/projected/fef95bfe-411c-4957-91df-8a85f2258c38-kube-api-access-xgcv6\") on node \"crc\" DevicePath \"\""
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.708827 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.739628 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fef95bfe-411c-4957-91df-8a85f2258c38" (UID: "fef95bfe-411c-4957-91df-8a85f2258c38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:18:52 crc kubenswrapper[4975]: I0122 11:18:52.810593 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef95bfe-411c-4957-91df-8a85f2258c38-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.098687 4975 generic.go:334] "Generic (PLEG): container finished" podID="fef95bfe-411c-4957-91df-8a85f2258c38" containerID="861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c" exitCode=0
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.098725 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7wb" event={"ID":"fef95bfe-411c-4957-91df-8a85f2258c38","Type":"ContainerDied","Data":"861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c"}
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.099008 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7wb" event={"ID":"fef95bfe-411c-4957-91df-8a85f2258c38","Type":"ContainerDied","Data":"76a16b9a378e3b00c71844dd345519d3c91e1b4948e258cb62d18d72f0ea00b4"}
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.098789 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7wb"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.099029 4975 scope.go:117] "RemoveContainer" containerID="861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.127641 4975 scope.go:117] "RemoveContainer" containerID="0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.139785 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tm7wb"]
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.148981 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tm7wb"]
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.158525 4975 scope.go:117] "RemoveContainer" containerID="da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.204520 4975 scope.go:117] "RemoveContainer" containerID="861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c"
Jan 22 11:18:53 crc kubenswrapper[4975]: E0122 11:18:53.205152 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c\": container with ID starting with 861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c not found: ID does not exist" containerID="861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.205212 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c"} err="failed to get container status \"861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c\": rpc error: code = NotFound desc = could not find container \"861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c\": container with ID starting with 861b4156df188e278ff0f930cd3ae06214f62dd7270291875aef2baac5c2bb0c not found: ID does not exist"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.205253 4975 scope.go:117] "RemoveContainer" containerID="0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12"
Jan 22 11:18:53 crc kubenswrapper[4975]: E0122 11:18:53.206228 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12\": container with ID starting with 0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12 not found: ID does not exist" containerID="0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.206287 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12"} err="failed to get container status \"0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12\": rpc error: code = NotFound desc = could not find container \"0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12\": container with ID starting with 0217d60459605f464dd68dd8d39351fe861f215105edc20d226d44f8b717ca12 not found: ID does not exist"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.206325 4975 scope.go:117] "RemoveContainer" containerID="da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40"
Jan 22 11:18:53 crc kubenswrapper[4975]: E0122 11:18:53.207145 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40\": container with ID starting with da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40 not found: ID does not exist" containerID="da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.207171 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40"} err="failed to get container status \"da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40\": rpc error: code = NotFound desc = could not find container \"da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40\": container with ID starting with da6ef5dfe8587956610e3c7729216553ac1f1e38c6ad8d69198a96a8aaddec40 not found: ID does not exist"
Jan 22 11:18:53 crc kubenswrapper[4975]: I0122 11:18:53.775726 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" path="/var/lib/kubelet/pods/fef95bfe-411c-4957-91df-8a85f2258c38/volumes"
Jan 22 11:19:03 crc kubenswrapper[4975]: I0122 11:19:03.755002 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7"
Jan 22 11:19:03 crc kubenswrapper[4975]: E0122 11:19:03.755891 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:19:17 crc kubenswrapper[4975]: I0122 11:19:17.763075 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7"
Jan 22 11:19:17 crc kubenswrapper[4975]: E0122 11:19:17.764535 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:19:31 crc kubenswrapper[4975]: I0122 11:19:31.755918 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7"
Jan 22 11:19:31 crc kubenswrapper[4975]: E0122 11:19:31.756848 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:19:43 crc kubenswrapper[4975]: E0122 11:19:43.756251 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:19:58 crc kubenswrapper[4975]: I0122 11:19:58.755775 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:19:59 crc kubenswrapper[4975]: I0122 11:19:59.832473 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"9f0a912f5dff3bb42ae66c461ff3c95472abf684a05047d754195ea06d777ef4"} Jan 22 11:21:17 crc kubenswrapper[4975]: I0122 11:21:17.674408 4975 generic.go:334] "Generic (PLEG): container finished" podID="f8154d48-3ee4-45f3-bf77-45befe6cdc3a" containerID="35c3e2f537ff4962af8428765c5b61fe8e1b31f1b80ccdae44a9fb2c2b18e395" exitCode=0 Jan 22 11:21:17 crc kubenswrapper[4975]: I0122 11:21:17.674535 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl" event={"ID":"f8154d48-3ee4-45f3-bf77-45befe6cdc3a","Type":"ContainerDied","Data":"35c3e2f537ff4962af8428765c5b61fe8e1b31f1b80ccdae44a9fb2c2b18e395"} Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.266999 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.430407 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-1\") pod \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.430481 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-inventory\") pod \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.430503 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-ssh-key-openstack-edpm-ipam\") pod \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.430577 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-extra-config-0\") pod \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.430601 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-1\") pod \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.431074 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-0\") pod \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.431440 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-0\") pod \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.431480 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm6tl\" (UniqueName: \"kubernetes.io/projected/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-kube-api-access-jm6tl\") pod \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.431508 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-combined-ca-bundle\") pod \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\" (UID: \"f8154d48-3ee4-45f3-bf77-45befe6cdc3a\") " Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.454696 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f8154d48-3ee4-45f3-bf77-45befe6cdc3a" (UID: "f8154d48-3ee4-45f3-bf77-45befe6cdc3a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.458231 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f8154d48-3ee4-45f3-bf77-45befe6cdc3a" (UID: "f8154d48-3ee4-45f3-bf77-45befe6cdc3a"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.458527 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f8154d48-3ee4-45f3-bf77-45befe6cdc3a" (UID: "f8154d48-3ee4-45f3-bf77-45befe6cdc3a"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.458898 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f8154d48-3ee4-45f3-bf77-45befe6cdc3a" (UID: "f8154d48-3ee4-45f3-bf77-45befe6cdc3a"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.460052 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f8154d48-3ee4-45f3-bf77-45befe6cdc3a" (UID: "f8154d48-3ee4-45f3-bf77-45befe6cdc3a"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.462586 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-kube-api-access-jm6tl" (OuterVolumeSpecName: "kube-api-access-jm6tl") pod "f8154d48-3ee4-45f3-bf77-45befe6cdc3a" (UID: "f8154d48-3ee4-45f3-bf77-45befe6cdc3a"). InnerVolumeSpecName "kube-api-access-jm6tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.464288 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f8154d48-3ee4-45f3-bf77-45befe6cdc3a" (UID: "f8154d48-3ee4-45f3-bf77-45befe6cdc3a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.464966 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-inventory" (OuterVolumeSpecName: "inventory") pod "f8154d48-3ee4-45f3-bf77-45befe6cdc3a" (UID: "f8154d48-3ee4-45f3-bf77-45befe6cdc3a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.474722 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8154d48-3ee4-45f3-bf77-45befe6cdc3a" (UID: "f8154d48-3ee4-45f3-bf77-45befe6cdc3a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.533891 4975 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.533918 4975 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.533929 4975 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.533938 4975 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.533946 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm6tl\" (UniqueName: \"kubernetes.io/projected/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-kube-api-access-jm6tl\") on node \"crc\" DevicePath \"\"" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.533954 4975 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.533963 4975 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.533972 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.533979 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8154d48-3ee4-45f3-bf77-45befe6cdc3a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.696565 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl" event={"ID":"f8154d48-3ee4-45f3-bf77-45befe6cdc3a","Type":"ContainerDied","Data":"98de8dea7082e8f95c6ff217a015353dec858c832481b43995aa2758528d5a79"} Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.696612 4975 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="98de8dea7082e8f95c6ff217a015353dec858c832481b43995aa2758528d5a79" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.696671 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-c25vl" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.833696 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs"] Jan 22 11:21:19 crc kubenswrapper[4975]: E0122 11:21:19.834151 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8154d48-3ee4-45f3-bf77-45befe6cdc3a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.834171 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8154d48-3ee4-45f3-bf77-45befe6cdc3a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 11:21:19 crc kubenswrapper[4975]: E0122 11:21:19.834202 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" containerName="registry-server" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.834212 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" containerName="registry-server" Jan 22 11:21:19 crc kubenswrapper[4975]: E0122 11:21:19.834239 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" containerName="extract-content" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.834247 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" containerName="extract-content" Jan 22 11:21:19 crc kubenswrapper[4975]: E0122 11:21:19.834262 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" containerName="extract-utilities" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.834272 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" containerName="extract-utilities" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.834484 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8154d48-3ee4-45f3-bf77-45befe6cdc3a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.834507 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef95bfe-411c-4957-91df-8a85f2258c38" containerName="registry-server" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.835396 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.841985 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.842213 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dl9l4" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.842271 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.842391 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.842888 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.843496 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.843553 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.843590 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.843614 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.843643 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.843673 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-txqs6\" (UniqueName: \"kubernetes.io/projected/89ca7561-30de-4093-8f2f-09455a740f94-kube-api-access-txqs6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.843796 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.856044 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs"] Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.945194 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.945503 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.945677 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.945794 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.945898 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.945999 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.946144 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqs6\" (UniqueName: \"kubernetes.io/projected/89ca7561-30de-4093-8f2f-09455a740f94-kube-api-access-txqs6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.949000 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.949176 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.949212 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.949596 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.953271 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.960535 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:19 crc kubenswrapper[4975]: I0122 11:21:19.967835 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqs6\" (UniqueName: 
\"kubernetes.io/projected/89ca7561-30de-4093-8f2f-09455a740f94-kube-api-access-txqs6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j5frs\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:20 crc kubenswrapper[4975]: I0122 11:21:20.168210 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:21:20 crc kubenswrapper[4975]: I0122 11:21:20.717250 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs"] Jan 22 11:21:21 crc kubenswrapper[4975]: I0122 11:21:21.713737 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" event={"ID":"89ca7561-30de-4093-8f2f-09455a740f94","Type":"ContainerStarted","Data":"4ca852f5971ae6bfd5d08bb0495d9bbbe277071dd5c2d15c8dc0cc0ab29c0225"} Jan 22 11:21:21 crc kubenswrapper[4975]: I0122 11:21:21.714161 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" event={"ID":"89ca7561-30de-4093-8f2f-09455a740f94","Type":"ContainerStarted","Data":"2e36f258a8291eef16e335ff3604cf5beaa3b94fb4a47ffea7b2902ba95326ac"} Jan 22 11:21:21 crc kubenswrapper[4975]: I0122 11:21:21.735918 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" podStartSLOduration=2.267659282 podStartE2EDuration="2.735899236s" podCreationTimestamp="2026-01-22 11:21:19 +0000 UTC" firstStartedPulling="2026-01-22 11:21:20.719541459 +0000 UTC m=+2673.377577579" lastFinishedPulling="2026-01-22 11:21:21.187781413 +0000 UTC m=+2673.845817533" observedRunningTime="2026-01-22 11:21:21.728105226 +0000 UTC m=+2674.386141326" watchObservedRunningTime="2026-01-22 11:21:21.735899236 +0000 UTC m=+2674.393935346" Jan 22 11:22:27 crc kubenswrapper[4975]: I0122 11:22:27.436498 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:22:27 crc kubenswrapper[4975]: I0122 11:22:27.437190 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:22:57 crc kubenswrapper[4975]: I0122 11:22:57.436411 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:22:57 crc kubenswrapper[4975]: I0122 11:22:57.437249 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:23:27 crc kubenswrapper[4975]: I0122 11:23:27.436473 4975 
patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:23:27 crc kubenswrapper[4975]: I0122 11:23:27.437116 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:23:27 crc kubenswrapper[4975]: I0122 11:23:27.437165 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 11:23:27 crc kubenswrapper[4975]: I0122 11:23:27.438107 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f0a912f5dff3bb42ae66c461ff3c95472abf684a05047d754195ea06d777ef4"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:23:27 crc kubenswrapper[4975]: I0122 11:23:27.438167 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://9f0a912f5dff3bb42ae66c461ff3c95472abf684a05047d754195ea06d777ef4" gracePeriod=600 Jan 22 11:23:28 crc kubenswrapper[4975]: I0122 11:23:28.148302 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="9f0a912f5dff3bb42ae66c461ff3c95472abf684a05047d754195ea06d777ef4" exitCode=0 Jan 22 11:23:28 crc kubenswrapper[4975]: I0122 11:23:28.148365 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"9f0a912f5dff3bb42ae66c461ff3c95472abf684a05047d754195ea06d777ef4"} Jan 22 11:23:28 crc kubenswrapper[4975]: I0122 11:23:28.148758 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"} Jan 22 11:23:28 crc kubenswrapper[4975]: I0122 11:23:28.148787 4975 scope.go:117] "RemoveContainer" containerID="530d1abc3cbe291dcb4d45ee8b9ad1725baad14f21b0532afff96aa754eaf5a7" Jan 22 11:24:07 crc kubenswrapper[4975]: I0122 11:24:07.594667 4975 generic.go:334] "Generic (PLEG): container finished" podID="89ca7561-30de-4093-8f2f-09455a740f94" containerID="4ca852f5971ae6bfd5d08bb0495d9bbbe277071dd5c2d15c8dc0cc0ab29c0225" exitCode=0 Jan 22 11:24:07 crc kubenswrapper[4975]: I0122 11:24:07.594801 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" event={"ID":"89ca7561-30de-4093-8f2f-09455a740f94","Type":"ContainerDied","Data":"4ca852f5971ae6bfd5d08bb0495d9bbbe277071dd5c2d15c8dc0cc0ab29c0225"} Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.032968 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.168747 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txqs6\" (UniqueName: \"kubernetes.io/projected/89ca7561-30de-4093-8f2f-09455a740f94-kube-api-access-txqs6\") pod \"89ca7561-30de-4093-8f2f-09455a740f94\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.168815 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-2\") pod \"89ca7561-30de-4093-8f2f-09455a740f94\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.168942 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ssh-key-openstack-edpm-ipam\") pod \"89ca7561-30de-4093-8f2f-09455a740f94\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.168990 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-inventory\") pod \"89ca7561-30de-4093-8f2f-09455a740f94\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.169010 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-telemetry-combined-ca-bundle\") pod \"89ca7561-30de-4093-8f2f-09455a740f94\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.169026 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-1\") pod \"89ca7561-30de-4093-8f2f-09455a740f94\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.169078 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-0\") pod \"89ca7561-30de-4093-8f2f-09455a740f94\" (UID: \"89ca7561-30de-4093-8f2f-09455a740f94\") " Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.174517 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "89ca7561-30de-4093-8f2f-09455a740f94" (UID: "89ca7561-30de-4093-8f2f-09455a740f94"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.175416 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ca7561-30de-4093-8f2f-09455a740f94-kube-api-access-txqs6" (OuterVolumeSpecName: "kube-api-access-txqs6") pod "89ca7561-30de-4093-8f2f-09455a740f94" (UID: "89ca7561-30de-4093-8f2f-09455a740f94"). InnerVolumeSpecName "kube-api-access-txqs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.195437 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "89ca7561-30de-4093-8f2f-09455a740f94" (UID: "89ca7561-30de-4093-8f2f-09455a740f94"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.196894 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-inventory" (OuterVolumeSpecName: "inventory") pod "89ca7561-30de-4093-8f2f-09455a740f94" (UID: "89ca7561-30de-4093-8f2f-09455a740f94"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.204659 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "89ca7561-30de-4093-8f2f-09455a740f94" (UID: "89ca7561-30de-4093-8f2f-09455a740f94"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.211104 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "89ca7561-30de-4093-8f2f-09455a740f94" (UID: "89ca7561-30de-4093-8f2f-09455a740f94"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.225975 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89ca7561-30de-4093-8f2f-09455a740f94" (UID: "89ca7561-30de-4093-8f2f-09455a740f94"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.271192 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.271246 4975 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.271256 4975 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.271284 4975 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.271354 4975 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.271366 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txqs6\" (UniqueName: \"kubernetes.io/projected/89ca7561-30de-4093-8f2f-09455a740f94-kube-api-access-txqs6\") on node \"crc\" DevicePath \"\"" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.271374 4975 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/89ca7561-30de-4093-8f2f-09455a740f94-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.634252 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" event={"ID":"89ca7561-30de-4093-8f2f-09455a740f94","Type":"ContainerDied","Data":"2e36f258a8291eef16e335ff3604cf5beaa3b94fb4a47ffea7b2902ba95326ac"} Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.634422 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e36f258a8291eef16e335ff3604cf5beaa3b94fb4a47ffea7b2902ba95326ac" Jan 22 11:24:09 crc kubenswrapper[4975]: I0122 11:24:09.634435 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j5frs" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.571083 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kvhkn"] Jan 22 11:24:13 crc kubenswrapper[4975]: E0122 11:24:13.572148 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ca7561-30de-4093-8f2f-09455a740f94" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.572174 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ca7561-30de-4093-8f2f-09455a740f94" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.572580 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ca7561-30de-4093-8f2f-09455a740f94" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.575067 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.597171 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvhkn"] Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.685770 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-utilities\") pod \"community-operators-kvhkn\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.685878 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdhd\" (UniqueName: \"kubernetes.io/projected/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-kube-api-access-shdhd\") pod \"community-operators-kvhkn\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.685912 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-catalog-content\") pod \"community-operators-kvhkn\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.787097 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-utilities\") pod \"community-operators-kvhkn\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.787277 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdhd\" (UniqueName: \"kubernetes.io/projected/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-kube-api-access-shdhd\") pod \"community-operators-kvhkn\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.787362 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-catalog-content\") pod \"community-operators-kvhkn\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.788174 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-catalog-content\") pod \"community-operators-kvhkn\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.788184 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-utilities\") pod \"community-operators-kvhkn\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.819472 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shdhd\" (UniqueName: \"kubernetes.io/projected/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-kube-api-access-shdhd\") pod \"community-operators-kvhkn\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:13 crc kubenswrapper[4975]: I0122 11:24:13.905871 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:14 crc kubenswrapper[4975]: I0122 11:24:14.514558 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvhkn"] Jan 22 11:24:14 crc kubenswrapper[4975]: I0122 11:24:14.692671 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhkn" event={"ID":"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0","Type":"ContainerStarted","Data":"aef90754ba5c8eaccc82438843688016095f144824ba97198e174aa139af61b9"} Jan 22 11:24:15 crc kubenswrapper[4975]: I0122 11:24:15.705157 4975 generic.go:334] "Generic (PLEG): container finished" podID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerID="634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7" exitCode=0 Jan 22 11:24:15 crc kubenswrapper[4975]: I0122 11:24:15.705202 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhkn" event={"ID":"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0","Type":"ContainerDied","Data":"634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7"} Jan 22 11:24:15 crc kubenswrapper[4975]: I0122 11:24:15.708616 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:24:16 crc kubenswrapper[4975]: I0122 11:24:16.718083 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhkn" event={"ID":"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0","Type":"ContainerStarted","Data":"3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd"} Jan 22 11:24:17 crc kubenswrapper[4975]: I0122 11:24:17.730286 4975 generic.go:334] "Generic (PLEG): container finished" podID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerID="3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd" exitCode=0 Jan 22 11:24:17 crc kubenswrapper[4975]: I0122 11:24:17.730355 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-kvhkn" event={"ID":"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0","Type":"ContainerDied","Data":"3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd"} Jan 22 11:24:18 crc kubenswrapper[4975]: I0122 11:24:18.740947 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhkn" event={"ID":"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0","Type":"ContainerStarted","Data":"5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0"} Jan 22 11:24:18 crc kubenswrapper[4975]: I0122 11:24:18.770233 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kvhkn" podStartSLOduration=3.320336169 podStartE2EDuration="5.770211728s" podCreationTimestamp="2026-01-22 11:24:13 +0000 UTC" firstStartedPulling="2026-01-22 11:24:15.708389648 +0000 UTC m=+2848.366425758" lastFinishedPulling="2026-01-22 11:24:18.158265177 +0000 UTC m=+2850.816301317" observedRunningTime="2026-01-22 11:24:18.762015803 +0000 UTC m=+2851.420051913" watchObservedRunningTime="2026-01-22 11:24:18.770211728 +0000 UTC m=+2851.428247838" Jan 22 11:24:23 crc kubenswrapper[4975]: I0122 11:24:23.906415 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:23 crc kubenswrapper[4975]: I0122 11:24:23.906913 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:23 crc kubenswrapper[4975]: I0122 11:24:23.989998 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:24 crc kubenswrapper[4975]: I0122 11:24:24.873183 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:24 crc kubenswrapper[4975]: I0122 11:24:24.934571 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kvhkn"] Jan 22 11:24:26 crc kubenswrapper[4975]: I0122 11:24:26.823034 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kvhkn" podUID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerName="registry-server" containerID="cri-o://5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0" gracePeriod=2 Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.449179 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kvhkn" Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.584276 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-catalog-content\") pod \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.584346 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shdhd\" (UniqueName: \"kubernetes.io/projected/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-kube-api-access-shdhd\") pod \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.584437 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-utilities\") pod \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\" (UID: \"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0\") " Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.585970 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-utilities" (OuterVolumeSpecName: "utilities") pod "f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" (UID: "f6de758f-6d4d-4a7f-80e4-8aa43102ddc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.591667 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-kube-api-access-shdhd" (OuterVolumeSpecName: "kube-api-access-shdhd") pod "f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" (UID: "f6de758f-6d4d-4a7f-80e4-8aa43102ddc0"). InnerVolumeSpecName "kube-api-access-shdhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.656745 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" (UID: "f6de758f-6d4d-4a7f-80e4-8aa43102ddc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.687542 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.687594 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.687605 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shdhd\" (UniqueName: \"kubernetes.io/projected/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0-kube-api-access-shdhd\") on node \"crc\" DevicePath \"\""
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.838658 4975 generic.go:334] "Generic (PLEG): container finished" podID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerID="5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0" exitCode=0
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.838719 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhkn" event={"ID":"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0","Type":"ContainerDied","Data":"5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0"}
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.838762 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvhkn" event={"ID":"f6de758f-6d4d-4a7f-80e4-8aa43102ddc0","Type":"ContainerDied","Data":"aef90754ba5c8eaccc82438843688016095f144824ba97198e174aa139af61b9"}
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.838794 4975 scope.go:117] "RemoveContainer" containerID="5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0"
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.838804 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvhkn"
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.879900 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kvhkn"]
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.884239 4975 scope.go:117] "RemoveContainer" containerID="3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd"
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.909394 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kvhkn"]
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.921486 4975 scope.go:117] "RemoveContainer" containerID="634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7"
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.991845 4975 scope.go:117] "RemoveContainer" containerID="5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0"
Jan 22 11:24:27 crc kubenswrapper[4975]: E0122 11:24:27.992155 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0\": container with ID starting with 5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0 not found: ID does not exist" containerID="5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0"
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.992188 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0"} err="failed to get container status \"5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0\": rpc error: code = NotFound desc = could not find container \"5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0\": container with ID starting with 5b79327294236ce933082dc6c6e69593d998fd973580abeb51890102db1e28d0 not found: ID does not exist"
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.992210 4975 scope.go:117] "RemoveContainer" containerID="3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd"
Jan 22 11:24:27 crc kubenswrapper[4975]: E0122 11:24:27.992419 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd\": container with ID starting with 3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd not found: ID does not exist" containerID="3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd"
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.992443 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd"} err="failed to get container status \"3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd\": rpc error: code = NotFound desc = could not find container \"3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd\": container with ID starting with 3ac2e44234bfa9997908a14957ab61d046c1540cb0d434fa00ae499512714fdd not found: ID does not exist"
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.992456 4975 scope.go:117] "RemoveContainer" containerID="634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7"
Jan 22 11:24:27 crc kubenswrapper[4975]: E0122 11:24:27.992623 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7\": container with ID starting with 634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7 not found: ID does not exist" containerID="634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7"
Jan 22 11:24:27 crc kubenswrapper[4975]: I0122 11:24:27.992651 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7"} err="failed to get container status \"634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7\": rpc error: code = NotFound desc = could not find container \"634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7\": container with ID starting with 634e2714d8963bd086fd5a730a42ece32b658f7e7ffd9634d1729d77f0f9deb7 not found: ID does not exist"
Jan 22 11:24:29 crc kubenswrapper[4975]: I0122 11:24:29.772545 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" path="/var/lib/kubelet/pods/f6de758f-6d4d-4a7f-80e4-8aa43102ddc0/volumes"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.938302 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 22 11:24:51 crc kubenswrapper[4975]: E0122 11:24:51.939283 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerName="extract-utilities"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.939299 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerName="extract-utilities"
Jan 22 11:24:51 crc kubenswrapper[4975]: E0122 11:24:51.939312 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerName="extract-content"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.939320 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerName="extract-content"
Jan 22 11:24:51 crc kubenswrapper[4975]: E0122 11:24:51.939360 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerName="registry-server"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.939367 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerName="registry-server"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.939569 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6de758f-6d4d-4a7f-80e4-8aa43102ddc0" containerName="registry-server"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.940217 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.943743 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-25ssx"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.944895 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.945518 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.945846 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 22 11:24:51 crc kubenswrapper[4975]: I0122 11:24:51.955317 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.138974 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.139175 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.139435 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-config-data\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.139555 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.139593 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.139702 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmc7r\" (UniqueName: \"kubernetes.io/projected/2401e7da-ce1d-45c2-b62e-bd617338ac17-kube-api-access-jmc7r\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.139818 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.139936 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.139989 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.242055 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.242129 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.242181 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmc7r\" (UniqueName: \"kubernetes.io/projected/2401e7da-ce1d-45c2-b62e-bd617338ac17-kube-api-access-jmc7r\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.242263 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.242325 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.242408 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.242592 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.242625 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.242676 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-config-data\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.243667 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.244160 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.244287 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.245573 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.245813 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-config-data\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.251829 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.253133 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.254709 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.272745 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmc7r\" (UniqueName: \"kubernetes.io/projected/2401e7da-ce1d-45c2-b62e-bd617338ac17-kube-api-access-jmc7r\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.292297 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:24:52 crc kubenswrapper[4975]: I0122 11:24:52.567778 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 22 11:24:53 crc kubenswrapper[4975]: I0122 11:24:53.081420 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 22 11:24:53 crc kubenswrapper[4975]: I0122 11:24:53.132652 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2401e7da-ce1d-45c2-b62e-bd617338ac17","Type":"ContainerStarted","Data":"848451e9beaf8a4614b68da91bc0816605a76b46f06ccd0ab37769f1de13b0f0"}
Jan 22 11:25:24 crc kubenswrapper[4975]: E0122 11:25:24.972273 4975 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 22 11:25:24 crc kubenswrapper[4975]: E0122 11:25:24.973046 4975 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmc7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2401e7da-ce1d-45c2-b62e-bd617338ac17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 11:25:24 crc kubenswrapper[4975]: E0122 11:25:24.975058 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="2401e7da-ce1d-45c2-b62e-bd617338ac17"
Jan 22 11:25:25 crc kubenswrapper[4975]: E0122 11:25:25.454885 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2401e7da-ce1d-45c2-b62e-bd617338ac17"
Jan 22 11:25:27 crc kubenswrapper[4975]: I0122 11:25:27.437062 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:25:27 crc kubenswrapper[4975]: I0122 11:25:27.437435 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:25:30 crc kubenswrapper[4975]: I0122 11:25:30.883270 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bkzsx"]
Jan 22 11:25:30 crc kubenswrapper[4975]: I0122 11:25:30.885535 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:30 crc kubenswrapper[4975]: I0122 11:25:30.904406 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkzsx"]
Jan 22 11:25:30 crc kubenswrapper[4975]: I0122 11:25:30.986477 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z949q\" (UniqueName: \"kubernetes.io/projected/ad856936-2279-47ea-bddb-384b2d4ddf64-kube-api-access-z949q\") pod \"redhat-marketplace-bkzsx\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") " pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:30 crc kubenswrapper[4975]: I0122 11:25:30.986643 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-catalog-content\") pod \"redhat-marketplace-bkzsx\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") " pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:30 crc kubenswrapper[4975]: I0122 11:25:30.986691 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-utilities\") pod \"redhat-marketplace-bkzsx\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") " pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:31 crc kubenswrapper[4975]: I0122 11:25:31.088090 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z949q\" (UniqueName: \"kubernetes.io/projected/ad856936-2279-47ea-bddb-384b2d4ddf64-kube-api-access-z949q\") pod \"redhat-marketplace-bkzsx\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") " pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:31 crc kubenswrapper[4975]: I0122 11:25:31.088457 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-catalog-content\") pod \"redhat-marketplace-bkzsx\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") " pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:31 crc kubenswrapper[4975]: I0122 11:25:31.088596 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-utilities\") pod \"redhat-marketplace-bkzsx\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") " pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:31 crc kubenswrapper[4975]: I0122 11:25:31.088924 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-catalog-content\") pod \"redhat-marketplace-bkzsx\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") " pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:31 crc kubenswrapper[4975]: I0122 11:25:31.089005 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-utilities\") pod \"redhat-marketplace-bkzsx\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") " pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:31 crc kubenswrapper[4975]: I0122 11:25:31.110572 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z949q\" (UniqueName: \"kubernetes.io/projected/ad856936-2279-47ea-bddb-384b2d4ddf64-kube-api-access-z949q\") pod \"redhat-marketplace-bkzsx\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") " pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:31 crc kubenswrapper[4975]: I0122 11:25:31.213955 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:31 crc kubenswrapper[4975]: I0122 11:25:31.696597 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkzsx"]
Jan 22 11:25:32 crc kubenswrapper[4975]: I0122 11:25:32.518515 4975 generic.go:334] "Generic (PLEG): container finished" podID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerID="abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1" exitCode=0
Jan 22 11:25:32 crc kubenswrapper[4975]: I0122 11:25:32.518566 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkzsx" event={"ID":"ad856936-2279-47ea-bddb-384b2d4ddf64","Type":"ContainerDied","Data":"abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1"}
Jan 22 11:25:32 crc kubenswrapper[4975]: I0122 11:25:32.519558 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkzsx" event={"ID":"ad856936-2279-47ea-bddb-384b2d4ddf64","Type":"ContainerStarted","Data":"c8ecef127e800c2ac4b3a60dfe7a7e2edfec7db152a4b13ef9c3fd2b8849dcaa"}
Jan 22 11:25:33 crc kubenswrapper[4975]: I0122 11:25:33.534133 4975 generic.go:334] "Generic (PLEG): container finished" podID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerID="a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f" exitCode=0
Jan 22 11:25:33 crc kubenswrapper[4975]: I0122 11:25:33.534474 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkzsx" event={"ID":"ad856936-2279-47ea-bddb-384b2d4ddf64","Type":"ContainerDied","Data":"a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f"}
Jan 22 11:25:33 crc kubenswrapper[4975]: E0122 11:25:33.574942 4975 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad856936_2279_47ea_bddb_384b2d4ddf64.slice/crio-conmon-a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 11:25:34 crc kubenswrapper[4975]: I0122 11:25:34.545343 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkzsx" event={"ID":"ad856936-2279-47ea-bddb-384b2d4ddf64","Type":"ContainerStarted","Data":"566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430"}
Jan 22 11:25:34 crc kubenswrapper[4975]: I0122 11:25:34.573488 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bkzsx" podStartSLOduration=3.180836307 podStartE2EDuration="4.573468301s" podCreationTimestamp="2026-01-22 11:25:30 +0000 UTC" firstStartedPulling="2026-01-22 11:25:32.520271388 +0000 UTC m=+2925.178307498" lastFinishedPulling="2026-01-22 11:25:33.912903382 +0000 UTC m=+2926.570939492" observedRunningTime="2026-01-22 11:25:34.564428513 +0000 UTC m=+2927.222464653" watchObservedRunningTime="2026-01-22 11:25:34.573468301 +0000 UTC m=+2927.231504411"
Jan 22 11:25:40 crc kubenswrapper[4975]: I0122 11:25:40.611021 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2401e7da-ce1d-45c2-b62e-bd617338ac17","Type":"ContainerStarted","Data":"29e924265803350900ebfe079665fb6768d5bc24d4561f2b937961a7f4e1910b"}
Jan 22 11:25:40 crc kubenswrapper[4975]: I0122 11:25:40.647228 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.270260274 podStartE2EDuration="50.647196238s" podCreationTimestamp="2026-01-22 11:24:50 +0000 UTC" firstStartedPulling="2026-01-22 11:24:53.079123064 +0000 UTC m=+2885.737159184" lastFinishedPulling="2026-01-22 11:25:39.456059048 +0000 UTC m=+2932.114095148" observedRunningTime="2026-01-22 11:25:40.636673148 +0000 UTC m=+2933.294709268" watchObservedRunningTime="2026-01-22 11:25:40.647196238 +0000 UTC m=+2933.305232388"
Jan 22 11:25:41 crc kubenswrapper[4975]: I0122 11:25:41.214357 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:41 crc kubenswrapper[4975]: I0122 11:25:41.214671 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:41 crc kubenswrapper[4975]: I0122 11:25:41.260272 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:41 crc kubenswrapper[4975]: I0122 11:25:41.683720 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:41 crc kubenswrapper[4975]: I0122 11:25:41.733832 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkzsx"]
Jan 22 11:25:43 crc kubenswrapper[4975]: I0122 11:25:43.641456 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bkzsx" podUID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerName="registry-server" containerID="cri-o://566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430" gracePeriod=2
Jan 22 11:25:43 crc kubenswrapper[4975]: E0122 11:25:43.798632 4975 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad856936_2279_47ea_bddb_384b2d4ddf64.slice/crio-566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.376743 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.471654 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z949q\" (UniqueName: \"kubernetes.io/projected/ad856936-2279-47ea-bddb-384b2d4ddf64-kube-api-access-z949q\") pod \"ad856936-2279-47ea-bddb-384b2d4ddf64\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") "
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.471809 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-utilities\") pod \"ad856936-2279-47ea-bddb-384b2d4ddf64\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") "
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.471833 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-catalog-content\") pod \"ad856936-2279-47ea-bddb-384b2d4ddf64\" (UID: \"ad856936-2279-47ea-bddb-384b2d4ddf64\") "
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.473008 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-utilities" (OuterVolumeSpecName: "utilities") pod "ad856936-2279-47ea-bddb-384b2d4ddf64" (UID: "ad856936-2279-47ea-bddb-384b2d4ddf64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.478595 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad856936-2279-47ea-bddb-384b2d4ddf64-kube-api-access-z949q" (OuterVolumeSpecName: "kube-api-access-z949q") pod "ad856936-2279-47ea-bddb-384b2d4ddf64" (UID: "ad856936-2279-47ea-bddb-384b2d4ddf64"). InnerVolumeSpecName "kube-api-access-z949q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.502650 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad856936-2279-47ea-bddb-384b2d4ddf64" (UID: "ad856936-2279-47ea-bddb-384b2d4ddf64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.575304 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.575388 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad856936-2279-47ea-bddb-384b2d4ddf64-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.575411 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z949q\" (UniqueName: \"kubernetes.io/projected/ad856936-2279-47ea-bddb-384b2d4ddf64-kube-api-access-z949q\") on node \"crc\" DevicePath \"\""
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.652524 4975 generic.go:334] "Generic (PLEG): container finished" podID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerID="566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430" exitCode=0
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.652588 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkzsx" event={"ID":"ad856936-2279-47ea-bddb-384b2d4ddf64","Type":"ContainerDied","Data":"566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430"}
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.652622 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkzsx"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.652654 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkzsx" event={"ID":"ad856936-2279-47ea-bddb-384b2d4ddf64","Type":"ContainerDied","Data":"c8ecef127e800c2ac4b3a60dfe7a7e2edfec7db152a4b13ef9c3fd2b8849dcaa"}
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.652688 4975 scope.go:117] "RemoveContainer" containerID="566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.689835 4975 scope.go:117] "RemoveContainer" containerID="a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.724511 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkzsx"]
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.738822 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkzsx"]
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.743894 4975 scope.go:117] "RemoveContainer" containerID="abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.799787 4975 scope.go:117] "RemoveContainer" containerID="566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430"
Jan 22 11:25:44 crc kubenswrapper[4975]: E0122 11:25:44.800205 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430\": container with ID starting with 566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430 not found: ID does not exist" containerID="566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.800233 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430"} err="failed to get container status \"566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430\": rpc error: code = NotFound desc = could not find container \"566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430\": container with ID starting with 566c09cab1e35a45691e00929538f62fd5b88b09f5f4d90d6934e4b6e8476430 not found: ID does not exist"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.800255 4975 scope.go:117] "RemoveContainer" containerID="a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f"
Jan 22 11:25:44 crc kubenswrapper[4975]: E0122 11:25:44.800667 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f\": container with ID starting with a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f not found: ID does not exist" containerID="a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.800685 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f"} err="failed to get container status \"a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f\": rpc error: code = NotFound desc = could not find container \"a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f\": container with ID starting with a4554be3a7305a5ea1d6f86e971424920294c314807f8eb2b46b3102f292229f not found: ID does not exist"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.800702 4975 scope.go:117] "RemoveContainer" containerID="abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1"
Jan 22 11:25:44 crc kubenswrapper[4975]: E0122 11:25:44.801031 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1\": container with ID starting with abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1 not found: ID does not exist" containerID="abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1"
Jan 22 11:25:44 crc kubenswrapper[4975]: I0122 11:25:44.801049 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1"} err="failed to get container status \"abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1\": rpc error: code = NotFound desc = could not find container \"abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1\": container with ID starting with abf2d9301ac20c0dd10d48f33f231f4919b7bf5b05a33315bfab4358ef3128e1 not found: ID does not exist"
Jan 22 11:25:45 crc kubenswrapper[4975]: I0122 11:25:45.767141 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad856936-2279-47ea-bddb-384b2d4ddf64" path="/var/lib/kubelet/pods/ad856936-2279-47ea-bddb-384b2d4ddf64/volumes"
Jan 22 11:25:57 crc kubenswrapper[4975]: I0122 11:25:57.435842 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:25:57 crc kubenswrapper[4975]: I0122 11:25:57.436436 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:26:27 crc kubenswrapper[4975]: I0122 11:26:27.437009 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:26:27 crc kubenswrapper[4975]: I0122 11:26:27.437815 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:26:27 crc kubenswrapper[4975]: I0122 11:26:27.437887 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72"
Jan 22 11:26:27 crc kubenswrapper[4975]: I0122 11:26:27.439013 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 11:26:27 crc kubenswrapper[4975]: I0122 11:26:27.439121 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" gracePeriod=600
Jan 22 11:26:27 crc kubenswrapper[4975]: E0122 11:26:27.561726 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:26:28 crc kubenswrapper[4975]: I0122 11:26:28.096949 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" exitCode=0
Jan 22 11:26:28 crc kubenswrapper[4975]: I0122 11:26:28.097009 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"}
Jan 22 11:26:28 crc kubenswrapper[4975]: I0122 11:26:28.097307 4975 scope.go:117] "RemoveContainer" containerID="9f0a912f5dff3bb42ae66c461ff3c95472abf684a05047d754195ea06d777ef4"
Jan 22 11:26:28 crc kubenswrapper[4975]: I0122 11:26:28.098883 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:26:28 crc kubenswrapper[4975]: E0122 11:26:28.099505 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:26:39 crc kubenswrapper[4975]: I0122 11:26:39.756455 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:26:39 crc kubenswrapper[4975]: E0122 11:26:39.757870 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:26:50 crc kubenswrapper[4975]: I0122 11:26:50.755595 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:26:50 crc kubenswrapper[4975]: E0122 11:26:50.756386 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:27:02 crc kubenswrapper[4975]: I0122 11:27:02.755411 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:27:02 crc kubenswrapper[4975]: E0122 11:27:02.756281 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:27:15 crc kubenswrapper[4975]: I0122 11:27:15.755245 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:27:15 crc kubenswrapper[4975]: E0122 11:27:15.756113 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:27:27 crc kubenswrapper[4975]: I0122 11:27:27.764138 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:27:27 crc kubenswrapper[4975]: E0122 11:27:27.765553 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:27:41 crc kubenswrapper[4975]: I0122 11:27:41.754781 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:27:41 crc kubenswrapper[4975]: E0122 11:27:41.755618 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:27:53 crc kubenswrapper[4975]: I0122 11:27:53.755950 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:27:53 crc kubenswrapper[4975]: E0122 11:27:53.757059 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:28:06 crc kubenswrapper[4975]: I0122 11:28:06.757317 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:28:06 crc kubenswrapper[4975]: E0122 11:28:06.758589 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:28:18 crc kubenswrapper[4975]: I0122 11:28:18.755402 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:28:18 crc kubenswrapper[4975]: E0122 11:28:18.756097 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:28:30 crc kubenswrapper[4975]: I0122 11:28:30.755243 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:28:30 crc kubenswrapper[4975]: E0122 11:28:30.755953 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:28:41 crc kubenswrapper[4975]: I0122 11:28:41.754539 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:28:41 crc kubenswrapper[4975]: E0122 11:28:41.755684 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:28:55 crc kubenswrapper[4975]: I0122 11:28:55.755675 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:28:55 crc kubenswrapper[4975]: E0122 11:28:55.756362 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:29:08 crc kubenswrapper[4975]: I0122 11:29:08.754861 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:29:08 crc kubenswrapper[4975]: E0122 11:29:08.755788 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.550387 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vttft"]
Jan 22 11:29:11 crc kubenswrapper[4975]: E0122 11:29:11.551381 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerName="registry-server"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.551397 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerName="registry-server"
Jan 22 11:29:11 crc kubenswrapper[4975]: E0122 11:29:11.551445 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerName="extract-content"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.551455 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerName="extract-content"
Jan 22 11:29:11 crc kubenswrapper[4975]: E0122 11:29:11.551478 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerName="extract-utilities"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.551491 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerName="extract-utilities"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.551765 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad856936-2279-47ea-bddb-384b2d4ddf64" containerName="registry-server"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.553418 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.560087 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vttft"]
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.590366 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-utilities\") pod \"certified-operators-vttft\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.590484 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-catalog-content\") pod \"certified-operators-vttft\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.590801 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pjl\" (UniqueName: \"kubernetes.io/projected/d59b9f1a-9319-476d-875a-f08014454ad7-kube-api-access-v5pjl\") pod \"certified-operators-vttft\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.693025 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-catalog-content\") pod \"certified-operators-vttft\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.693148 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5pjl\" (UniqueName: \"kubernetes.io/projected/d59b9f1a-9319-476d-875a-f08014454ad7-kube-api-access-v5pjl\") pod \"certified-operators-vttft\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.693173 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-utilities\") pod \"certified-operators-vttft\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.693743 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-utilities\") pod \"certified-operators-vttft\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.693757 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-catalog-content\") pod \"certified-operators-vttft\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.716536 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5pjl\" (UniqueName: \"kubernetes.io/projected/d59b9f1a-9319-476d-875a-f08014454ad7-kube-api-access-v5pjl\") pod \"certified-operators-vttft\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:11 crc kubenswrapper[4975]: I0122 11:29:11.891424 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:12 crc kubenswrapper[4975]: I0122 11:29:12.820783 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vttft"]
Jan 22 11:29:13 crc kubenswrapper[4975]: I0122 11:29:13.808509 4975 generic.go:334] "Generic (PLEG): container finished" podID="d59b9f1a-9319-476d-875a-f08014454ad7" containerID="5ab7c4d4f6c1fa5a0306ed392271dd132d11a03f2f8dc1389cab7ad99cd11957" exitCode=0
Jan 22 11:29:13 crc kubenswrapper[4975]: I0122 11:29:13.808605 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vttft" event={"ID":"d59b9f1a-9319-476d-875a-f08014454ad7","Type":"ContainerDied","Data":"5ab7c4d4f6c1fa5a0306ed392271dd132d11a03f2f8dc1389cab7ad99cd11957"}
Jan 22 11:29:13 crc kubenswrapper[4975]: I0122 11:29:13.808833 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vttft" event={"ID":"d59b9f1a-9319-476d-875a-f08014454ad7","Type":"ContainerStarted","Data":"e7fedd53504fe70e3e3c8884bad35c2edf6eb7d3cf824ef99291148ea0d70752"}
Jan 22 11:29:14 crc kubenswrapper[4975]: I0122 11:29:14.819439 4975 generic.go:334] "Generic (PLEG): container finished" podID="d59b9f1a-9319-476d-875a-f08014454ad7" containerID="1a55e387776e2dea674bc65a96925c668c98edbd144205d508ea3b1dd7ffcd7b" exitCode=0
Jan 22 11:29:14 crc kubenswrapper[4975]: I0122 11:29:14.819480 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vttft" event={"ID":"d59b9f1a-9319-476d-875a-f08014454ad7","Type":"ContainerDied","Data":"1a55e387776e2dea674bc65a96925c668c98edbd144205d508ea3b1dd7ffcd7b"}
Jan 22 11:29:15 crc kubenswrapper[4975]: I0122 11:29:15.829502 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vttft" event={"ID":"d59b9f1a-9319-476d-875a-f08014454ad7","Type":"ContainerStarted","Data":"300201bbe2d76c20d97438423dff4b5c1ab9a8a9179eb9f0f3bc1b38edf0d2ef"}
Jan 22 11:29:15 crc kubenswrapper[4975]: I0122 11:29:15.850673 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vttft" podStartSLOduration=3.197682809 podStartE2EDuration="4.850650235s" podCreationTimestamp="2026-01-22 11:29:11 +0000 UTC" firstStartedPulling="2026-01-22 11:29:13.811729448 +0000 UTC m=+3146.469765568" lastFinishedPulling="2026-01-22 11:29:15.464696884 +0000 UTC m=+3148.122732994" observedRunningTime="2026-01-22 11:29:15.845200248 +0000 UTC m=+3148.503236378" watchObservedRunningTime="2026-01-22 11:29:15.850650235 +0000 UTC m=+3148.508686355"
Jan 22 11:29:21 crc kubenswrapper[4975]: I0122 11:29:21.892094 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:21 crc kubenswrapper[4975]: I0122 11:29:21.892706 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:21 crc kubenswrapper[4975]: I0122 11:29:21.942469 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:22 crc kubenswrapper[4975]: I0122 11:29:22.948951 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vttft"
Jan 22 11:29:23 crc kubenswrapper[4975]: I0122 11:29:23.007255 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vttft"]
Jan 22 11:29:23 crc kubenswrapper[4975]: I0122 11:29:23.755477 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:29:23 crc kubenswrapper[4975]: E0122 11:29:23.755838 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:29:24 crc kubenswrapper[4975]: I0122 11:29:24.903837 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vttft" podUID="d59b9f1a-9319-476d-875a-f08014454ad7" containerName="registry-server" containerID="cri-o://300201bbe2d76c20d97438423dff4b5c1ab9a8a9179eb9f0f3bc1b38edf0d2ef" gracePeriod=2
Jan 22 11:29:25 crc kubenswrapper[4975]: I0122 11:29:25.917218 4975 generic.go:334] "Generic (PLEG): container finished" podID="d59b9f1a-9319-476d-875a-f08014454ad7" containerID="300201bbe2d76c20d97438423dff4b5c1ab9a8a9179eb9f0f3bc1b38edf0d2ef" exitCode=0
Jan 22 11:29:25 crc kubenswrapper[4975]: I0122 11:29:25.917272 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vttft" event={"ID":"d59b9f1a-9319-476d-875a-f08014454ad7","Type":"ContainerDied","Data":"300201bbe2d76c20d97438423dff4b5c1ab9a8a9179eb9f0f3bc1b38edf0d2ef"}
Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.565360 4975 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-vttft" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.743841 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-utilities\") pod \"d59b9f1a-9319-476d-875a-f08014454ad7\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.744269 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5pjl\" (UniqueName: \"kubernetes.io/projected/d59b9f1a-9319-476d-875a-f08014454ad7-kube-api-access-v5pjl\") pod \"d59b9f1a-9319-476d-875a-f08014454ad7\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.744353 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-catalog-content\") pod \"d59b9f1a-9319-476d-875a-f08014454ad7\" (UID: \"d59b9f1a-9319-476d-875a-f08014454ad7\") " Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.745022 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-utilities" (OuterVolumeSpecName: "utilities") pod "d59b9f1a-9319-476d-875a-f08014454ad7" (UID: "d59b9f1a-9319-476d-875a-f08014454ad7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.754690 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59b9f1a-9319-476d-875a-f08014454ad7-kube-api-access-v5pjl" (OuterVolumeSpecName: "kube-api-access-v5pjl") pod "d59b9f1a-9319-476d-875a-f08014454ad7" (UID: "d59b9f1a-9319-476d-875a-f08014454ad7"). InnerVolumeSpecName "kube-api-access-v5pjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.806500 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d59b9f1a-9319-476d-875a-f08014454ad7" (UID: "d59b9f1a-9319-476d-875a-f08014454ad7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.847036 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.847269 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5pjl\" (UniqueName: \"kubernetes.io/projected/d59b9f1a-9319-476d-875a-f08014454ad7-kube-api-access-v5pjl\") on node \"crc\" DevicePath \"\"" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.847286 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59b9f1a-9319-476d-875a-f08014454ad7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.929809 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vttft" event={"ID":"d59b9f1a-9319-476d-875a-f08014454ad7","Type":"ContainerDied","Data":"e7fedd53504fe70e3e3c8884bad35c2edf6eb7d3cf824ef99291148ea0d70752"} Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.929866 4975 scope.go:117] "RemoveContainer" containerID="300201bbe2d76c20d97438423dff4b5c1ab9a8a9179eb9f0f3bc1b38edf0d2ef" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.929887 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vttft" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.968111 4975 scope.go:117] "RemoveContainer" containerID="1a55e387776e2dea674bc65a96925c668c98edbd144205d508ea3b1dd7ffcd7b" Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.976419 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vttft"] Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.985844 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vttft"] Jan 22 11:29:26 crc kubenswrapper[4975]: I0122 11:29:26.993508 4975 scope.go:117] "RemoveContainer" containerID="5ab7c4d4f6c1fa5a0306ed392271dd132d11a03f2f8dc1389cab7ad99cd11957" Jan 22 11:29:27 crc kubenswrapper[4975]: I0122 11:29:27.772224 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59b9f1a-9319-476d-875a-f08014454ad7" path="/var/lib/kubelet/pods/d59b9f1a-9319-476d-875a-f08014454ad7/volumes" Jan 22 11:29:38 crc kubenswrapper[4975]: I0122 11:29:38.756981 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" Jan 22 11:29:38 crc kubenswrapper[4975]: E0122 11:29:38.758308 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:29:52 crc kubenswrapper[4975]: I0122 11:29:52.755397 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" Jan 22 11:29:52 crc kubenswrapper[4975]: E0122 11:29:52.756011 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.469959 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-46scg"] Jan 22 11:29:56 crc kubenswrapper[4975]: E0122 11:29:56.470589 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59b9f1a-9319-476d-875a-f08014454ad7" containerName="registry-server" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.470603 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59b9f1a-9319-476d-875a-f08014454ad7" containerName="registry-server" Jan 22 11:29:56 crc kubenswrapper[4975]: E0122 11:29:56.470620 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59b9f1a-9319-476d-875a-f08014454ad7" containerName="extract-utilities" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.470627 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59b9f1a-9319-476d-875a-f08014454ad7" containerName="extract-utilities" Jan 22 11:29:56 crc kubenswrapper[4975]: E0122 11:29:56.470649 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59b9f1a-9319-476d-875a-f08014454ad7" containerName="extract-content" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.470655 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59b9f1a-9319-476d-875a-f08014454ad7" containerName="extract-content" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.470826 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59b9f1a-9319-476d-875a-f08014454ad7" containerName="registry-server" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.472206 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.485403 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46scg"] Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.582078 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-utilities\") pod \"redhat-operators-46scg\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.582317 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-catalog-content\") pod \"redhat-operators-46scg\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.582876 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5tnj\" (UniqueName: \"kubernetes.io/projected/bb990d2f-d779-4da8-8926-bc7ff84fb769-kube-api-access-v5tnj\") pod \"redhat-operators-46scg\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.685411 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5tnj\" (UniqueName: \"kubernetes.io/projected/bb990d2f-d779-4da8-8926-bc7ff84fb769-kube-api-access-v5tnj\") pod \"redhat-operators-46scg\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.685505 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-utilities\") pod \"redhat-operators-46scg\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.685541 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-catalog-content\") pod \"redhat-operators-46scg\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.686107 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-catalog-content\") pod \"redhat-operators-46scg\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.686116 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-utilities\") pod \"redhat-operators-46scg\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.704748 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v5tnj\" (UniqueName: \"kubernetes.io/projected/bb990d2f-d779-4da8-8926-bc7ff84fb769-kube-api-access-v5tnj\") pod \"redhat-operators-46scg\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:56 crc kubenswrapper[4975]: I0122 11:29:56.854737 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:29:57 crc kubenswrapper[4975]: I0122 11:29:57.314216 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46scg"] Jan 22 11:29:58 crc kubenswrapper[4975]: I0122 11:29:58.257456 4975 generic.go:334] "Generic (PLEG): container finished" podID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerID="6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651" exitCode=0 Jan 22 11:29:58 crc kubenswrapper[4975]: I0122 11:29:58.257763 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46scg" event={"ID":"bb990d2f-d779-4da8-8926-bc7ff84fb769","Type":"ContainerDied","Data":"6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651"} Jan 22 11:29:58 crc kubenswrapper[4975]: I0122 11:29:58.257795 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46scg" event={"ID":"bb990d2f-d779-4da8-8926-bc7ff84fb769","Type":"ContainerStarted","Data":"f530e57f2b5898ecd2740f512f61149f4f45cc0f16264a66c973b3a718ba1170"} Jan 22 11:29:58 crc kubenswrapper[4975]: I0122 11:29:58.260816 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.149414 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn"] Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.151673 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.154136 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.157738 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.159344 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn"] Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.258538 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-config-volume\") pod \"collect-profiles-29484690-6mqfn\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.258990 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcq85\" (UniqueName: \"kubernetes.io/projected/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-kube-api-access-jcq85\") pod \"collect-profiles-29484690-6mqfn\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.259018 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-secret-volume\") pod \"collect-profiles-29484690-6mqfn\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.275668 4975 generic.go:334] "Generic (PLEG): container finished" podID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerID="6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548" exitCode=0 Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.275722 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46scg" event={"ID":"bb990d2f-d779-4da8-8926-bc7ff84fb769","Type":"ContainerDied","Data":"6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548"} Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.360766 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-config-volume\") pod \"collect-profiles-29484690-6mqfn\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.360849 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcq85\" (UniqueName: \"kubernetes.io/projected/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-kube-api-access-jcq85\") pod \"collect-profiles-29484690-6mqfn\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.360875 4975 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-secret-volume\") pod \"collect-profiles-29484690-6mqfn\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.362191 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-config-volume\") pod \"collect-profiles-29484690-6mqfn\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.376633 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-secret-volume\") pod \"collect-profiles-29484690-6mqfn\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.379428 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcq85\" (UniqueName: \"kubernetes.io/projected/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-kube-api-access-jcq85\") pod \"collect-profiles-29484690-6mqfn\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.478263 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:00 crc kubenswrapper[4975]: I0122 11:30:00.985748 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn"] Jan 22 11:30:00 crc kubenswrapper[4975]: W0122 11:30:00.998614 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c13fa6e_5b9a_44b9_9f8d_2a031efa17d5.slice/crio-08f8b1c41e5ade69d4a6e2c8a7d06cb5881075cd9a98dbf650234aeb1807232a WatchSource:0}: Error finding container 08f8b1c41e5ade69d4a6e2c8a7d06cb5881075cd9a98dbf650234aeb1807232a: Status 404 returned error can't find the container with id 08f8b1c41e5ade69d4a6e2c8a7d06cb5881075cd9a98dbf650234aeb1807232a Jan 22 11:30:01 crc kubenswrapper[4975]: I0122 11:30:01.285627 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46scg" event={"ID":"bb990d2f-d779-4da8-8926-bc7ff84fb769","Type":"ContainerStarted","Data":"ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0"} Jan 22 11:30:01 crc kubenswrapper[4975]: I0122 11:30:01.286669 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" event={"ID":"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5","Type":"ContainerStarted","Data":"e58202a032fec1e46978911b3c4c5460249c0fc3211387fd1d109388afb2be21"} Jan 22 11:30:01 crc kubenswrapper[4975]: I0122 11:30:01.286695 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" event={"ID":"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5","Type":"ContainerStarted","Data":"08f8b1c41e5ade69d4a6e2c8a7d06cb5881075cd9a98dbf650234aeb1807232a"} Jan 22 11:30:01 crc 
kubenswrapper[4975]: I0122 11:30:01.308885 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-46scg" podStartSLOduration=2.79226767 podStartE2EDuration="5.308864542s" podCreationTimestamp="2026-01-22 11:29:56 +0000 UTC" firstStartedPulling="2026-01-22 11:29:58.260565118 +0000 UTC m=+3190.918601218" lastFinishedPulling="2026-01-22 11:30:00.77716197 +0000 UTC m=+3193.435198090" observedRunningTime="2026-01-22 11:30:01.303403124 +0000 UTC m=+3193.961439254" watchObservedRunningTime="2026-01-22 11:30:01.308864542 +0000 UTC m=+3193.966900642" Jan 22 11:30:01 crc kubenswrapper[4975]: I0122 11:30:01.319940 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" podStartSLOduration=1.31992018 podStartE2EDuration="1.31992018s" podCreationTimestamp="2026-01-22 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:30:01.314251937 +0000 UTC m=+3193.972288047" watchObservedRunningTime="2026-01-22 11:30:01.31992018 +0000 UTC m=+3193.977956290" Jan 22 11:30:03 crc kubenswrapper[4975]: I0122 11:30:03.309029 4975 generic.go:334] "Generic (PLEG): container finished" podID="4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5" containerID="e58202a032fec1e46978911b3c4c5460249c0fc3211387fd1d109388afb2be21" exitCode=0 Jan 22 11:30:03 crc kubenswrapper[4975]: I0122 11:30:03.309170 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" event={"ID":"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5","Type":"ContainerDied","Data":"e58202a032fec1e46978911b3c4c5460249c0fc3211387fd1d109388afb2be21"} Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.734717 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.858614 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-config-volume\") pod \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.858897 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-secret-volume\") pod \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.859009 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcq85\" (UniqueName: \"kubernetes.io/projected/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-kube-api-access-jcq85\") pod \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\" (UID: \"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5\") " Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.859135 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-config-volume" (OuterVolumeSpecName: "config-volume") pod "4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5" (UID: "4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.859758 4975 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.865005 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5" (UID: "4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.865034 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-kube-api-access-jcq85" (OuterVolumeSpecName: "kube-api-access-jcq85") pod "4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5" (UID: "4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5"). InnerVolumeSpecName "kube-api-access-jcq85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.961532 4975 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:30:04 crc kubenswrapper[4975]: I0122 11:30:04.961576 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcq85\" (UniqueName: \"kubernetes.io/projected/4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5-kube-api-access-jcq85\") on node \"crc\" DevicePath \"\"" Jan 22 11:30:05 crc kubenswrapper[4975]: I0122 11:30:05.326124 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" event={"ID":"4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5","Type":"ContainerDied","Data":"08f8b1c41e5ade69d4a6e2c8a7d06cb5881075cd9a98dbf650234aeb1807232a"} Jan 22 11:30:05 crc kubenswrapper[4975]: I0122 11:30:05.326173 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f8b1c41e5ade69d4a6e2c8a7d06cb5881075cd9a98dbf650234aeb1807232a" Jan 22 11:30:05 crc kubenswrapper[4975]: I0122 11:30:05.326216 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-6mqfn" Jan 22 11:30:05 crc kubenswrapper[4975]: I0122 11:30:05.403505 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk"] Jan 22 11:30:05 crc kubenswrapper[4975]: I0122 11:30:05.414803 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-qsjxk"] Jan 22 11:30:05 crc kubenswrapper[4975]: I0122 11:30:05.757599 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" Jan 22 11:30:05 crc kubenswrapper[4975]: E0122 11:30:05.757815 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:30:05 crc kubenswrapper[4975]: I0122 11:30:05.770950 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87614eb1-74cf-41fe-8076-ee6ddad974d4" path="/var/lib/kubelet/pods/87614eb1-74cf-41fe-8076-ee6ddad974d4/volumes" Jan 22 11:30:06 crc kubenswrapper[4975]: I0122 11:30:06.855884 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:30:06 crc kubenswrapper[4975]: I0122 11:30:06.855941 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:30:07 crc kubenswrapper[4975]: I0122 11:30:07.906000 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46scg" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerName="registry-server" probeResult="failure" output=< Jan 22 11:30:07 crc kubenswrapper[4975]: timeout: failed to connect service ":50051" within 1s Jan 22 11:30:07 crc kubenswrapper[4975]: > Jan 22 11:30:16 crc kubenswrapper[4975]: I0122 11:30:16.910226 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:30:16 crc kubenswrapper[4975]: I0122 11:30:16.987558 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:30:17 crc kubenswrapper[4975]: I0122 11:30:17.147895 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46scg"] Jan 22 11:30:18 crc kubenswrapper[4975]: I0122 11:30:18.457207 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-46scg" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerName="registry-server" containerID="cri-o://ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0" gracePeriod=2 Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.081710 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.163204 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5tnj\" (UniqueName: \"kubernetes.io/projected/bb990d2f-d779-4da8-8926-bc7ff84fb769-kube-api-access-v5tnj\") pod \"bb990d2f-d779-4da8-8926-bc7ff84fb769\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.163254 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-utilities\") pod \"bb990d2f-d779-4da8-8926-bc7ff84fb769\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.163272 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-catalog-content\") pod \"bb990d2f-d779-4da8-8926-bc7ff84fb769\" (UID: \"bb990d2f-d779-4da8-8926-bc7ff84fb769\") " Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.165034 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-utilities" (OuterVolumeSpecName: "utilities") pod "bb990d2f-d779-4da8-8926-bc7ff84fb769" (UID: "bb990d2f-d779-4da8-8926-bc7ff84fb769"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.171460 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb990d2f-d779-4da8-8926-bc7ff84fb769-kube-api-access-v5tnj" (OuterVolumeSpecName: "kube-api-access-v5tnj") pod "bb990d2f-d779-4da8-8926-bc7ff84fb769" (UID: "bb990d2f-d779-4da8-8926-bc7ff84fb769"). InnerVolumeSpecName "kube-api-access-v5tnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.265563 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5tnj\" (UniqueName: \"kubernetes.io/projected/bb990d2f-d779-4da8-8926-bc7ff84fb769-kube-api-access-v5tnj\") on node \"crc\" DevicePath \"\"" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.265596 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.303734 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb990d2f-d779-4da8-8926-bc7ff84fb769" (UID: "bb990d2f-d779-4da8-8926-bc7ff84fb769"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.367292 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb990d2f-d779-4da8-8926-bc7ff84fb769-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.468628 4975 generic.go:334] "Generic (PLEG): container finished" podID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerID="ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0" exitCode=0 Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.468675 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46scg" event={"ID":"bb990d2f-d779-4da8-8926-bc7ff84fb769","Type":"ContainerDied","Data":"ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0"} Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.468712 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46scg" event={"ID":"bb990d2f-d779-4da8-8926-bc7ff84fb769","Type":"ContainerDied","Data":"f530e57f2b5898ecd2740f512f61149f4f45cc0f16264a66c973b3a718ba1170"} Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.468714 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46scg" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.468729 4975 scope.go:117] "RemoveContainer" containerID="ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.507605 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46scg"] Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.512458 4975 scope.go:117] "RemoveContainer" containerID="6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.516673 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-46scg"] Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.540323 4975 scope.go:117] "RemoveContainer" containerID="6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.592062 4975 scope.go:117] "RemoveContainer" containerID="ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0" Jan 22 11:30:19 crc kubenswrapper[4975]: E0122 11:30:19.592870 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0\": container with ID starting with ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0 not found: ID does not exist" containerID="ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.592902 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0"} err="failed to get container status \"ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0\": rpc error: code = NotFound desc = could not find container \"ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0\": container with ID starting with ddba73a86a66f82e979345f043491d6e6afadecd0dfff3b74698d4210ea030e0 not found: ID does not exist" Jan 22 11:30:19 crc 
kubenswrapper[4975]: I0122 11:30:19.592922 4975 scope.go:117] "RemoveContainer" containerID="6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548" Jan 22 11:30:19 crc kubenswrapper[4975]: E0122 11:30:19.593189 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548\": container with ID starting with 6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548 not found: ID does not exist" containerID="6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.593221 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548"} err="failed to get container status \"6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548\": rpc error: code = NotFound desc = could not find container \"6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548\": container with ID starting with 6886be87a3cf72b89a9091a2594a2b68c277e4e2b95e29bad46e534d125c9548 not found: ID does not exist" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.593243 4975 scope.go:117] "RemoveContainer" containerID="6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651" Jan 22 11:30:19 crc kubenswrapper[4975]: E0122 11:30:19.593609 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651\": container with ID starting with 6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651 not found: ID does not exist" containerID="6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.593630 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651"} err="failed to get container status \"6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651\": rpc error: code = NotFound desc = could not find container \"6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651\": container with ID starting with 6e59abc00690eeeb6924ddff9b4411cc5fdd637280221563c9b245a093a5d651 not found: ID does not exist" Jan 22 11:30:19 crc kubenswrapper[4975]: I0122 11:30:19.766001 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" path="/var/lib/kubelet/pods/bb990d2f-d779-4da8-8926-bc7ff84fb769/volumes" Jan 22 11:30:20 crc kubenswrapper[4975]: I0122 11:30:20.755086 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" Jan 22 11:30:20 crc kubenswrapper[4975]: E0122 11:30:20.757095 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:30:33 crc kubenswrapper[4975]: I0122 11:30:33.894036 4975 scope.go:117] "RemoveContainer" containerID="412f122b9f563dc086ffebd6c3ea927ee5effe994d8c7aed5e4f18ead5fb17cc" 
Jan 22 11:30:34 crc kubenswrapper[4975]: I0122 11:30:34.756454 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" Jan 22 11:30:34 crc kubenswrapper[4975]: E0122 11:30:34.756990 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:30:47 crc kubenswrapper[4975]: I0122 11:30:47.766280 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" Jan 22 11:30:47 crc kubenswrapper[4975]: E0122 11:30:47.767218 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:31:01 crc kubenswrapper[4975]: I0122 11:31:01.755043 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" Jan 22 11:31:01 crc kubenswrapper[4975]: E0122 11:31:01.755722 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:31:16 crc kubenswrapper[4975]: I0122 11:31:16.756099 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" Jan 22 11:31:16 crc kubenswrapper[4975]: E0122 11:31:16.756745 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:31:29 crc kubenswrapper[4975]: I0122 11:31:29.755661 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594" Jan 22 11:31:30 crc kubenswrapper[4975]: I0122 11:31:30.293684 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"2e348b7bf006c7144d6819d0f3f55761dd7a3e56cad989448ba2e294387bd1c1"} Jan 22 11:33:57 crc kubenswrapper[4975]: I0122 11:33:57.435944 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 22 11:33:57 crc kubenswrapper[4975]: I0122 11:33:57.436521 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:34:27 crc kubenswrapper[4975]: I0122 11:34:27.436744 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:34:27 crc kubenswrapper[4975]: I0122 11:34:27.437424 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:34:57 crc kubenswrapper[4975]: I0122 11:34:57.436659 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:34:57 crc kubenswrapper[4975]: I0122 11:34:57.437559 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:34:57 crc kubenswrapper[4975]: I0122 11:34:57.437638 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 11:34:57 crc kubenswrapper[4975]: I0122 11:34:57.438904 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e348b7bf006c7144d6819d0f3f55761dd7a3e56cad989448ba2e294387bd1c1"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:34:57 crc kubenswrapper[4975]: I0122 11:34:57.439034 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://2e348b7bf006c7144d6819d0f3f55761dd7a3e56cad989448ba2e294387bd1c1" gracePeriod=600 Jan 22 11:34:58 crc kubenswrapper[4975]: I0122 11:34:58.429138 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="2e348b7bf006c7144d6819d0f3f55761dd7a3e56cad989448ba2e294387bd1c1" exitCode=0 Jan 22 11:34:58 crc kubenswrapper[4975]: I0122 11:34:58.429254 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"2e348b7bf006c7144d6819d0f3f55761dd7a3e56cad989448ba2e294387bd1c1"} Jan 22 11:34:58 crc 
kubenswrapper[4975]: I0122 11:34:58.429725 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05"}
Jan 22 11:34:58 crc kubenswrapper[4975]: I0122 11:34:58.429747 4975 scope.go:117] "RemoveContainer" containerID="9e29c9e0024743c23f9aec1aab030c5d226a532f8cbcf6ae6f0302e044655594"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.808876 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-98jw9"]
Jan 22 11:35:13 crc kubenswrapper[4975]: E0122 11:35:13.810615 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerName="registry-server"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.810667 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerName="registry-server"
Jan 22 11:35:13 crc kubenswrapper[4975]: E0122 11:35:13.810701 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerName="extract-utilities"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.810710 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerName="extract-utilities"
Jan 22 11:35:13 crc kubenswrapper[4975]: E0122 11:35:13.810722 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerName="extract-content"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.810731 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerName="extract-content"
Jan 22 11:35:13 crc kubenswrapper[4975]: E0122 11:35:13.810753 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5" containerName="collect-profiles"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.810761 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5" containerName="collect-profiles"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.814642 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c13fa6e-5b9a-44b9-9f8d-2a031efa17d5" containerName="collect-profiles"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.814718 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb990d2f-d779-4da8-8926-bc7ff84fb769" containerName="registry-server"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.836828 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.868910 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98jw9"]
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.949142 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-catalog-content\") pod \"community-operators-98jw9\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") " pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.950126 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-utilities\") pod \"community-operators-98jw9\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") " pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:13 crc kubenswrapper[4975]: I0122 11:35:13.950769 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp2q8\" (UniqueName: \"kubernetes.io/projected/3c78a5db-bffc-411d-a60d-22a1556a720a-kube-api-access-tp2q8\") pod \"community-operators-98jw9\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") " pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:14 crc kubenswrapper[4975]: I0122 11:35:14.053348 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-catalog-content\") pod \"community-operators-98jw9\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") " pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:14 crc kubenswrapper[4975]: I0122 11:35:14.053430 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-utilities\") pod \"community-operators-98jw9\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") " pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:14 crc kubenswrapper[4975]: I0122 11:35:14.053952 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp2q8\" (UniqueName: \"kubernetes.io/projected/3c78a5db-bffc-411d-a60d-22a1556a720a-kube-api-access-tp2q8\") pod \"community-operators-98jw9\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") " pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:14 crc kubenswrapper[4975]: I0122 11:35:14.054154 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-utilities\") pod \"community-operators-98jw9\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") " pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:14 crc kubenswrapper[4975]: I0122 11:35:14.054237 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-catalog-content\") pod \"community-operators-98jw9\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") " pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:14 crc kubenswrapper[4975]: I0122 11:35:14.073106 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp2q8\" (UniqueName: \"kubernetes.io/projected/3c78a5db-bffc-411d-a60d-22a1556a720a-kube-api-access-tp2q8\") pod \"community-operators-98jw9\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") " pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:14 crc kubenswrapper[4975]: I0122 11:35:14.169420 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:14 crc kubenswrapper[4975]: I0122 11:35:14.736635 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98jw9"]
Jan 22 11:35:14 crc kubenswrapper[4975]: W0122 11:35:14.746593 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c78a5db_bffc_411d_a60d_22a1556a720a.slice/crio-e40f45a2f7195135fe3bea7ca48901a7d4667708c87766b940fe13849f8274f0 WatchSource:0}: Error finding container e40f45a2f7195135fe3bea7ca48901a7d4667708c87766b940fe13849f8274f0: Status 404 returned error can't find the container with id e40f45a2f7195135fe3bea7ca48901a7d4667708c87766b940fe13849f8274f0
Jan 22 11:35:15 crc kubenswrapper[4975]: I0122 11:35:15.596830 4975 generic.go:334] "Generic (PLEG): container finished" podID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerID="d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a" exitCode=0
Jan 22 11:35:15 crc kubenswrapper[4975]: I0122 11:35:15.596898 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98jw9" event={"ID":"3c78a5db-bffc-411d-a60d-22a1556a720a","Type":"ContainerDied","Data":"d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a"}
Jan 22 11:35:15 crc kubenswrapper[4975]: I0122 11:35:15.597223 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98jw9" event={"ID":"3c78a5db-bffc-411d-a60d-22a1556a720a","Type":"ContainerStarted","Data":"e40f45a2f7195135fe3bea7ca48901a7d4667708c87766b940fe13849f8274f0"}
Jan 22 11:35:15 crc kubenswrapper[4975]: I0122 11:35:15.599914 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 11:35:16 crc kubenswrapper[4975]: I0122 11:35:16.612446 4975 generic.go:334] "Generic (PLEG): container finished" podID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerID="a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298" exitCode=0
Jan 22 11:35:16 crc kubenswrapper[4975]: I0122 11:35:16.612650 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98jw9" event={"ID":"3c78a5db-bffc-411d-a60d-22a1556a720a","Type":"ContainerDied","Data":"a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298"}
Jan 22 11:35:17 crc kubenswrapper[4975]: I0122 11:35:17.624207 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98jw9" event={"ID":"3c78a5db-bffc-411d-a60d-22a1556a720a","Type":"ContainerStarted","Data":"b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee"}
Jan 22 11:35:17 crc kubenswrapper[4975]: I0122 11:35:17.641680 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-98jw9" podStartSLOduration=3.224850094 podStartE2EDuration="4.641660052s" podCreationTimestamp="2026-01-22 11:35:13 +0000 UTC" firstStartedPulling="2026-01-22 11:35:15.599476068 +0000 UTC m=+3508.257512218" lastFinishedPulling="2026-01-22 11:35:17.016286036 +0000 UTC m=+3509.674322176" observedRunningTime="2026-01-22 11:35:17.640287654 +0000 UTC m=+3510.298323764" watchObservedRunningTime="2026-01-22 11:35:17.641660052 +0000 UTC m=+3510.299696162"
Jan 22 11:35:24 crc kubenswrapper[4975]: I0122 11:35:24.169878 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:24 crc kubenswrapper[4975]: I0122 11:35:24.170391 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:24 crc kubenswrapper[4975]: I0122 11:35:24.240725 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:24 crc kubenswrapper[4975]: I0122 11:35:24.777520 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:25 crc kubenswrapper[4975]: I0122 11:35:25.401767 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98jw9"]
Jan 22 11:35:26 crc kubenswrapper[4975]: I0122 11:35:26.702249 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-98jw9" podUID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerName="registry-server" containerID="cri-o://b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee" gracePeriod=2
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.233296 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.416527 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp2q8\" (UniqueName: \"kubernetes.io/projected/3c78a5db-bffc-411d-a60d-22a1556a720a-kube-api-access-tp2q8\") pod \"3c78a5db-bffc-411d-a60d-22a1556a720a\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") "
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.416744 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-utilities\") pod \"3c78a5db-bffc-411d-a60d-22a1556a720a\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") "
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.416829 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-catalog-content\") pod \"3c78a5db-bffc-411d-a60d-22a1556a720a\" (UID: \"3c78a5db-bffc-411d-a60d-22a1556a720a\") "
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.417783 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-utilities" (OuterVolumeSpecName: "utilities") pod "3c78a5db-bffc-411d-a60d-22a1556a720a" (UID: "3c78a5db-bffc-411d-a60d-22a1556a720a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.428508 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c78a5db-bffc-411d-a60d-22a1556a720a-kube-api-access-tp2q8" (OuterVolumeSpecName: "kube-api-access-tp2q8") pod "3c78a5db-bffc-411d-a60d-22a1556a720a" (UID: "3c78a5db-bffc-411d-a60d-22a1556a720a"). InnerVolumeSpecName "kube-api-access-tp2q8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.495581 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c78a5db-bffc-411d-a60d-22a1556a720a" (UID: "3c78a5db-bffc-411d-a60d-22a1556a720a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.519036 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.519086 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c78a5db-bffc-411d-a60d-22a1556a720a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.519098 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp2q8\" (UniqueName: \"kubernetes.io/projected/3c78a5db-bffc-411d-a60d-22a1556a720a-kube-api-access-tp2q8\") on node \"crc\" DevicePath \"\""
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.720883 4975 generic.go:334] "Generic (PLEG): container finished" podID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerID="b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee" exitCode=0
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.720951 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98jw9" event={"ID":"3c78a5db-bffc-411d-a60d-22a1556a720a","Type":"ContainerDied","Data":"b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee"}
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.721001 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98jw9" event={"ID":"3c78a5db-bffc-411d-a60d-22a1556a720a","Type":"ContainerDied","Data":"e40f45a2f7195135fe3bea7ca48901a7d4667708c87766b940fe13849f8274f0"}
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.721033 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98jw9"
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.721064 4975 scope.go:117] "RemoveContainer" containerID="b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee"
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.753122 4975 scope.go:117] "RemoveContainer" containerID="a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298"
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.778428 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98jw9"]
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.787749 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-98jw9"]
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.798406 4975 scope.go:117] "RemoveContainer" containerID="d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a"
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.842642 4975 scope.go:117] "RemoveContainer" containerID="b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee"
Jan 22 11:35:27 crc kubenswrapper[4975]: E0122 11:35:27.843018 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee\": container with ID starting with b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee not found: ID does not exist" containerID="b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee"
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.843056 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee"} err="failed to get container status \"b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee\": rpc error: code = NotFound desc = could not find container \"b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee\": container with ID starting with b362443400f3e99bd32568858e8108c0eef3d79bd568aff37569b85ce84119ee not found: ID does not exist"
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.843084 4975 scope.go:117] "RemoveContainer" containerID="a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298"
Jan 22 11:35:27 crc kubenswrapper[4975]: E0122 11:35:27.843376 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298\": container with ID starting with a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298 not found: ID does not exist" containerID="a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298"
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.843401 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298"} err="failed to get container status \"a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298\": rpc error: code = NotFound desc = could not find container \"a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298\": container with ID starting with a0db54ef974fb166b5fc3fb5bf59513646c9dc3b81d4044e4514d890756b4298 not found: ID does not exist"
Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.843420 4975 scope.go:117] "RemoveContainer" containerID="d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a"
containerID="d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a" Jan 22 11:35:27 crc kubenswrapper[4975]: E0122 11:35:27.843911 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a\": container with ID starting with d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a not found: ID does not exist" containerID="d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a" Jan 22 11:35:27 crc kubenswrapper[4975]: I0122 11:35:27.843944 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a"} err="failed to get container status \"d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a\": rpc error: code = NotFound desc = could not find container \"d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a\": container with ID starting with d8864317f84d17b9b2a313e7d4b6377b09c121f4fbe8dec4cd7d455d12d00d3a not found: ID does not exist" Jan 22 11:35:29 crc kubenswrapper[4975]: I0122 11:35:29.769920 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c78a5db-bffc-411d-a60d-22a1556a720a" path="/var/lib/kubelet/pods/3c78a5db-bffc-411d-a60d-22a1556a720a/volumes" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.116272 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5t4h7"] Jan 22 11:35:35 crc kubenswrapper[4975]: E0122 11:35:35.116925 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerName="extract-content" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.116938 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerName="extract-content" Jan 22 11:35:35 crc kubenswrapper[4975]: E0122 11:35:35.116958 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerName="registry-server" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.116965 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerName="registry-server" Jan 22 11:35:35 crc kubenswrapper[4975]: E0122 11:35:35.116990 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerName="extract-utilities" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.116996 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerName="extract-utilities" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.117158 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c78a5db-bffc-411d-a60d-22a1556a720a" containerName="registry-server" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.118849 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.143923 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t4h7"] Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.277864 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjdhl\" (UniqueName: \"kubernetes.io/projected/3228fa4f-5b42-4d5a-8307-a6c385a634cb-kube-api-access-qjdhl\") pod \"redhat-marketplace-5t4h7\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.277944 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-utilities\") pod \"redhat-marketplace-5t4h7\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.278397 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-catalog-content\") pod \"redhat-marketplace-5t4h7\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.379862 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-catalog-content\") pod \"redhat-marketplace-5t4h7\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.379952 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjdhl\" (UniqueName: \"kubernetes.io/projected/3228fa4f-5b42-4d5a-8307-a6c385a634cb-kube-api-access-qjdhl\") pod \"redhat-marketplace-5t4h7\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.379981 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-utilities\") pod \"redhat-marketplace-5t4h7\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.380494 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-utilities\") pod \"redhat-marketplace-5t4h7\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.380687 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-catalog-content\") pod \"redhat-marketplace-5t4h7\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.401538 4975 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qjdhl\" (UniqueName: \"kubernetes.io/projected/3228fa4f-5b42-4d5a-8307-a6c385a634cb-kube-api-access-qjdhl\") pod \"redhat-marketplace-5t4h7\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.439048 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:35 crc kubenswrapper[4975]: I0122 11:35:35.913748 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t4h7"] Jan 22 11:35:36 crc kubenswrapper[4975]: I0122 11:35:36.802883 4975 generic.go:334] "Generic (PLEG): container finished" podID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerID="27398635f21ad702c73a3e89f1a552f91f5633d982e0381670f25721db6a6186" exitCode=0 Jan 22 11:35:36 crc kubenswrapper[4975]: I0122 11:35:36.802935 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t4h7" event={"ID":"3228fa4f-5b42-4d5a-8307-a6c385a634cb","Type":"ContainerDied","Data":"27398635f21ad702c73a3e89f1a552f91f5633d982e0381670f25721db6a6186"} Jan 22 11:35:36 crc kubenswrapper[4975]: I0122 11:35:36.803145 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t4h7" event={"ID":"3228fa4f-5b42-4d5a-8307-a6c385a634cb","Type":"ContainerStarted","Data":"48606d9b8a8d93cf935ed8d277816069fb68300d32c36dea975fe921072cdc63"} Jan 22 11:35:37 crc kubenswrapper[4975]: I0122 11:35:37.813521 4975 generic.go:334] "Generic (PLEG): container finished" podID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerID="dcbc75d13b7560d8d3c0790efed970fe52424bea859d14e67bc5f20a96699546" exitCode=0 Jan 22 11:35:37 crc kubenswrapper[4975]: I0122 11:35:37.813634 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t4h7" event={"ID":"3228fa4f-5b42-4d5a-8307-a6c385a634cb","Type":"ContainerDied","Data":"dcbc75d13b7560d8d3c0790efed970fe52424bea859d14e67bc5f20a96699546"} Jan 22 11:35:38 crc kubenswrapper[4975]: I0122 11:35:38.827050 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t4h7" event={"ID":"3228fa4f-5b42-4d5a-8307-a6c385a634cb","Type":"ContainerStarted","Data":"3fc442e0032d0d9d13463c05cc51696e09bf92ff56a47a34aa12dccd764500cb"} Jan 22 11:35:38 crc kubenswrapper[4975]: I0122 11:35:38.848003 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5t4h7" podStartSLOduration=2.429951296 podStartE2EDuration="3.847983658s" podCreationTimestamp="2026-01-22 11:35:35 +0000 UTC" firstStartedPulling="2026-01-22 11:35:36.804358624 +0000 UTC m=+3529.462394724" lastFinishedPulling="2026-01-22 11:35:38.222390926 +0000 UTC m=+3530.880427086" observedRunningTime="2026-01-22 11:35:38.845588343 +0000 UTC m=+3531.503624453" watchObservedRunningTime="2026-01-22 11:35:38.847983658 +0000 UTC m=+3531.506019788" Jan 22 11:35:45 crc kubenswrapper[4975]: I0122 11:35:45.439601 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:45 crc kubenswrapper[4975]: I0122 11:35:45.440298 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:45 crc kubenswrapper[4975]: I0122 11:35:45.534695 4975 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:45 crc kubenswrapper[4975]: I0122 11:35:45.948479 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:46 crc kubenswrapper[4975]: I0122 11:35:46.418076 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t4h7"] Jan 22 11:35:47 crc kubenswrapper[4975]: I0122 11:35:47.896234 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5t4h7" podUID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerName="registry-server" containerID="cri-o://3fc442e0032d0d9d13463c05cc51696e09bf92ff56a47a34aa12dccd764500cb" gracePeriod=2 Jan 22 11:35:48 crc kubenswrapper[4975]: I0122 11:35:48.908134 4975 generic.go:334] "Generic (PLEG): container finished" podID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerID="3fc442e0032d0d9d13463c05cc51696e09bf92ff56a47a34aa12dccd764500cb" exitCode=0 Jan 22 11:35:48 crc kubenswrapper[4975]: I0122 11:35:48.908206 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t4h7" event={"ID":"3228fa4f-5b42-4d5a-8307-a6c385a634cb","Type":"ContainerDied","Data":"3fc442e0032d0d9d13463c05cc51696e09bf92ff56a47a34aa12dccd764500cb"} Jan 22 11:35:48 crc kubenswrapper[4975]: I0122 11:35:48.909764 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t4h7" event={"ID":"3228fa4f-5b42-4d5a-8307-a6c385a634cb","Type":"ContainerDied","Data":"48606d9b8a8d93cf935ed8d277816069fb68300d32c36dea975fe921072cdc63"} Jan 22 11:35:48 crc kubenswrapper[4975]: I0122 11:35:48.909784 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48606d9b8a8d93cf935ed8d277816069fb68300d32c36dea975fe921072cdc63" Jan 22 11:35:48 crc kubenswrapper[4975]: I0122 11:35:48.917624 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.063253 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-utilities\") pod \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.063365 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjdhl\" (UniqueName: \"kubernetes.io/projected/3228fa4f-5b42-4d5a-8307-a6c385a634cb-kube-api-access-qjdhl\") pod \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.063514 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-catalog-content\") pod \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\" (UID: \"3228fa4f-5b42-4d5a-8307-a6c385a634cb\") " Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.084035 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-utilities" (OuterVolumeSpecName: "utilities") pod "3228fa4f-5b42-4d5a-8307-a6c385a634cb" (UID: "3228fa4f-5b42-4d5a-8307-a6c385a634cb"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.090548 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3228fa4f-5b42-4d5a-8307-a6c385a634cb-kube-api-access-qjdhl" (OuterVolumeSpecName: "kube-api-access-qjdhl") pod "3228fa4f-5b42-4d5a-8307-a6c385a634cb" (UID: "3228fa4f-5b42-4d5a-8307-a6c385a634cb"). InnerVolumeSpecName "kube-api-access-qjdhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.101422 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3228fa4f-5b42-4d5a-8307-a6c385a634cb" (UID: "3228fa4f-5b42-4d5a-8307-a6c385a634cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.165766 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.165806 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3228fa4f-5b42-4d5a-8307-a6c385a634cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.165820 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjdhl\" (UniqueName: \"kubernetes.io/projected/3228fa4f-5b42-4d5a-8307-a6c385a634cb-kube-api-access-qjdhl\") on node \"crc\" DevicePath \"\"" Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.916575 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t4h7" Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.945071 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t4h7"] Jan 22 11:35:49 crc kubenswrapper[4975]: I0122 11:35:49.951750 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t4h7"] Jan 22 11:35:51 crc kubenswrapper[4975]: I0122 11:35:51.766527 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" path="/var/lib/kubelet/pods/3228fa4f-5b42-4d5a-8307-a6c385a634cb/volumes" Jan 22 11:36:46 crc kubenswrapper[4975]: I0122 11:36:46.781606 4975 generic.go:334] "Generic (PLEG): container finished" podID="2401e7da-ce1d-45c2-b62e-bd617338ac17" containerID="29e924265803350900ebfe079665fb6768d5bc24d4561f2b937961a7f4e1910b" exitCode=0 Jan 22 11:36:46 crc kubenswrapper[4975]: I0122 11:36:46.781798 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2401e7da-ce1d-45c2-b62e-bd617338ac17","Type":"ContainerDied","Data":"29e924265803350900ebfe079665fb6768d5bc24d4561f2b937961a7f4e1910b"} Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.261557 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.327644 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config-secret\") pod \"2401e7da-ce1d-45c2-b62e-bd617338ac17\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.327999 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-temporary\") pod \"2401e7da-ce1d-45c2-b62e-bd617338ac17\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.328049 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-workdir\") pod \"2401e7da-ce1d-45c2-b62e-bd617338ac17\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.328111 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmc7r\" (UniqueName: \"kubernetes.io/projected/2401e7da-ce1d-45c2-b62e-bd617338ac17-kube-api-access-jmc7r\") pod \"2401e7da-ce1d-45c2-b62e-bd617338ac17\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.328256 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-config-data\") pod \"2401e7da-ce1d-45c2-b62e-bd617338ac17\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.328309 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ssh-key\") pod \"2401e7da-ce1d-45c2-b62e-bd617338ac17\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.328455 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config\") pod \"2401e7da-ce1d-45c2-b62e-bd617338ac17\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.328480 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ca-certs\") pod \"2401e7da-ce1d-45c2-b62e-bd617338ac17\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.328512 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2401e7da-ce1d-45c2-b62e-bd617338ac17\" (UID: \"2401e7da-ce1d-45c2-b62e-bd617338ac17\") " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.329784 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-config-data" (OuterVolumeSpecName: "config-data") pod 
"2401e7da-ce1d-45c2-b62e-bd617338ac17" (UID: "2401e7da-ce1d-45c2-b62e-bd617338ac17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.330053 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2401e7da-ce1d-45c2-b62e-bd617338ac17" (UID: "2401e7da-ce1d-45c2-b62e-bd617338ac17"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.335617 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2401e7da-ce1d-45c2-b62e-bd617338ac17-kube-api-access-jmc7r" (OuterVolumeSpecName: "kube-api-access-jmc7r") pod "2401e7da-ce1d-45c2-b62e-bd617338ac17" (UID: "2401e7da-ce1d-45c2-b62e-bd617338ac17"). InnerVolumeSpecName "kube-api-access-jmc7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.336209 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2401e7da-ce1d-45c2-b62e-bd617338ac17" (UID: "2401e7da-ce1d-45c2-b62e-bd617338ac17"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.337948 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2401e7da-ce1d-45c2-b62e-bd617338ac17" (UID: "2401e7da-ce1d-45c2-b62e-bd617338ac17"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.361154 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2401e7da-ce1d-45c2-b62e-bd617338ac17" (UID: "2401e7da-ce1d-45c2-b62e-bd617338ac17"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.373042 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2401e7da-ce1d-45c2-b62e-bd617338ac17" (UID: "2401e7da-ce1d-45c2-b62e-bd617338ac17"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.374023 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2401e7da-ce1d-45c2-b62e-bd617338ac17" (UID: "2401e7da-ce1d-45c2-b62e-bd617338ac17"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.386193 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2401e7da-ce1d-45c2-b62e-bd617338ac17" (UID: "2401e7da-ce1d-45c2-b62e-bd617338ac17"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.430386 4975 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.430414 4975 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.430423 4975 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.430432 4975 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.430464 4975 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.430475 4975 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2401e7da-ce1d-45c2-b62e-bd617338ac17-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.430484 4975 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.430494 4975 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2401e7da-ce1d-45c2-b62e-bd617338ac17-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.430503 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmc7r\" (UniqueName: \"kubernetes.io/projected/2401e7da-ce1d-45c2-b62e-bd617338ac17-kube-api-access-jmc7r\") on node \"crc\" DevicePath \"\"" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.450072 4975 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.532476 4975 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.806723 4975 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"2401e7da-ce1d-45c2-b62e-bd617338ac17","Type":"ContainerDied","Data":"848451e9beaf8a4614b68da91bc0816605a76b46f06ccd0ab37769f1de13b0f0"} Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.806950 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848451e9beaf8a4614b68da91bc0816605a76b46f06ccd0ab37769f1de13b0f0" Jan 22 11:36:48 crc kubenswrapper[4975]: I0122 11:36:48.806825 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.812323 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 11:36:53 crc kubenswrapper[4975]: E0122 11:36:53.813465 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerName="extract-content" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.813494 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerName="extract-content" Jan 22 11:36:53 crc kubenswrapper[4975]: E0122 11:36:53.813531 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerName="registry-server" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.813544 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerName="registry-server" Jan 22 11:36:53 crc kubenswrapper[4975]: E0122 11:36:53.813583 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2401e7da-ce1d-45c2-b62e-bd617338ac17" containerName="tempest-tests-tempest-tests-runner" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.813598 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2401e7da-ce1d-45c2-b62e-bd617338ac17" containerName="tempest-tests-tempest-tests-runner" Jan 22 11:36:53 crc kubenswrapper[4975]: E0122 11:36:53.813631 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerName="extract-utilities" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.813647 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerName="extract-utilities" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.813988 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="2401e7da-ce1d-45c2-b62e-bd617338ac17" containerName="tempest-tests-tempest-tests-runner" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.814049 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="3228fa4f-5b42-4d5a-8307-a6c385a634cb" containerName="registry-server" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.815206 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.819070 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-25ssx" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.822596 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.960024 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5scv6\" (UniqueName: \"kubernetes.io/projected/3a80b919-f543-4c35-8432-79eb4b8c9c2a-kube-api-access-5scv6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a80b919-f543-4c35-8432-79eb4b8c9c2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 11:36:53 crc kubenswrapper[4975]: I0122 11:36:53.960135 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a80b919-f543-4c35-8432-79eb4b8c9c2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 11:36:54 crc kubenswrapper[4975]: I0122 11:36:54.062624 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5scv6\" (UniqueName: \"kubernetes.io/projected/3a80b919-f543-4c35-8432-79eb4b8c9c2a-kube-api-access-5scv6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a80b919-f543-4c35-8432-79eb4b8c9c2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 11:36:54 crc kubenswrapper[4975]: I0122 11:36:54.062702 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a80b919-f543-4c35-8432-79eb4b8c9c2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 11:36:54 crc kubenswrapper[4975]: I0122 11:36:54.063652 4975 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a80b919-f543-4c35-8432-79eb4b8c9c2a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 11:36:54 crc kubenswrapper[4975]: I0122 11:36:54.101033 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5scv6\" (UniqueName: \"kubernetes.io/projected/3a80b919-f543-4c35-8432-79eb4b8c9c2a-kube-api-access-5scv6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a80b919-f543-4c35-8432-79eb4b8c9c2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 11:36:54 crc kubenswrapper[4975]: I0122 11:36:54.108118 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a80b919-f543-4c35-8432-79eb4b8c9c2a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 11:36:54 crc 
kubenswrapper[4975]: I0122 11:36:54.159635 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 11:36:54 crc kubenswrapper[4975]: I0122 11:36:54.650920 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 11:36:54 crc kubenswrapper[4975]: I0122 11:36:54.882224 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3a80b919-f543-4c35-8432-79eb4b8c9c2a","Type":"ContainerStarted","Data":"7420198fc36dffb8aedf6f30125b3220ebf76e61fddf64b23e011ad4c2c56956"} Jan 22 11:36:55 crc kubenswrapper[4975]: I0122 11:36:55.894388 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3a80b919-f543-4c35-8432-79eb4b8c9c2a","Type":"ContainerStarted","Data":"6794f75403f0a0f7d65fe149135dda8deea7365f4d73fb0536c53f4e74a980da"} Jan 22 11:36:55 crc kubenswrapper[4975]: I0122 11:36:55.913676 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.076123153 podStartE2EDuration="2.913660132s" podCreationTimestamp="2026-01-22 11:36:53 +0000 UTC" firstStartedPulling="2026-01-22 11:36:54.653680973 +0000 UTC m=+3607.311717123" lastFinishedPulling="2026-01-22 11:36:55.491217982 +0000 UTC m=+3608.149254102" observedRunningTime="2026-01-22 11:36:55.906859875 +0000 UTC m=+3608.564896045" watchObservedRunningTime="2026-01-22 11:36:55.913660132 +0000 UTC m=+3608.571696242" Jan 22 11:36:57 crc kubenswrapper[4975]: I0122 11:36:57.436318 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:36:57 crc kubenswrapper[4975]: I0122 11:36:57.436846 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.573209 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8twjj/must-gather-r8vqm"] Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.575804 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8twjj/must-gather-r8vqm" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.581419 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8twjj"/"openshift-service-ca.crt" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.581632 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8twjj"/"default-dockercfg-5j6lb" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.581548 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8twjj"/"kube-root-ca.crt" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.585465 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8twjj/must-gather-r8vqm"] Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.599912 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdvx\" (UniqueName: \"kubernetes.io/projected/e962ca51-fd5b-4010-a9ff-9448189876be-kube-api-access-mkdvx\") pod \"must-gather-r8vqm\" (UID: \"e962ca51-fd5b-4010-a9ff-9448189876be\") " pod="openshift-must-gather-8twjj/must-gather-r8vqm" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.600006 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e962ca51-fd5b-4010-a9ff-9448189876be-must-gather-output\") pod \"must-gather-r8vqm\" (UID: \"e962ca51-fd5b-4010-a9ff-9448189876be\") " pod="openshift-must-gather-8twjj/must-gather-r8vqm" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.702001 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkdvx\" (UniqueName: \"kubernetes.io/projected/e962ca51-fd5b-4010-a9ff-9448189876be-kube-api-access-mkdvx\") pod \"must-gather-r8vqm\" (UID: \"e962ca51-fd5b-4010-a9ff-9448189876be\") " pod="openshift-must-gather-8twjj/must-gather-r8vqm" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.702312 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e962ca51-fd5b-4010-a9ff-9448189876be-must-gather-output\") pod \"must-gather-r8vqm\" (UID: \"e962ca51-fd5b-4010-a9ff-9448189876be\") " pod="openshift-must-gather-8twjj/must-gather-r8vqm" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.702945 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e962ca51-fd5b-4010-a9ff-9448189876be-must-gather-output\") pod \"must-gather-r8vqm\" (UID: \"e962ca51-fd5b-4010-a9ff-9448189876be\") " pod="openshift-must-gather-8twjj/must-gather-r8vqm" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.723057 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkdvx\" (UniqueName: \"kubernetes.io/projected/e962ca51-fd5b-4010-a9ff-9448189876be-kube-api-access-mkdvx\") pod \"must-gather-r8vqm\" (UID: \"e962ca51-fd5b-4010-a9ff-9448189876be\") " pod="openshift-must-gather-8twjj/must-gather-r8vqm" Jan 22 11:37:19 crc kubenswrapper[4975]: I0122 11:37:19.896596 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8twjj/must-gather-r8vqm" Jan 22 11:37:20 crc kubenswrapper[4975]: W0122 11:37:20.409697 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode962ca51_fd5b_4010_a9ff_9448189876be.slice/crio-873ba7db0cbe4391471e1862f8d7bf0a73d18789d3984f765d8f894fc148a597 WatchSource:0}: Error finding container 873ba7db0cbe4391471e1862f8d7bf0a73d18789d3984f765d8f894fc148a597: Status 404 returned error can't find the container with id 873ba7db0cbe4391471e1862f8d7bf0a73d18789d3984f765d8f894fc148a597 Jan 22 11:37:20 crc kubenswrapper[4975]: I0122 11:37:20.416488 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8twjj/must-gather-r8vqm"] Jan 22 11:37:20 crc kubenswrapper[4975]: I0122 11:37:20.471314 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/must-gather-r8vqm" event={"ID":"e962ca51-fd5b-4010-a9ff-9448189876be","Type":"ContainerStarted","Data":"873ba7db0cbe4391471e1862f8d7bf0a73d18789d3984f765d8f894fc148a597"} Jan 22 11:37:26 crc kubenswrapper[4975]: I0122 11:37:26.538649 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/must-gather-r8vqm" event={"ID":"e962ca51-fd5b-4010-a9ff-9448189876be","Type":"ContainerStarted","Data":"36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3"} Jan 22 11:37:27 crc kubenswrapper[4975]: I0122 11:37:27.436425 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:37:27 crc kubenswrapper[4975]: I0122 11:37:27.436887 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:37:27 crc kubenswrapper[4975]: I0122 11:37:27.550838 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/must-gather-r8vqm" event={"ID":"e962ca51-fd5b-4010-a9ff-9448189876be","Type":"ContainerStarted","Data":"0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521"} Jan 22 11:37:27 crc kubenswrapper[4975]: I0122 11:37:27.574825 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8twjj/must-gather-r8vqm" podStartSLOduration=2.87355164 podStartE2EDuration="8.574802314s" podCreationTimestamp="2026-01-22 11:37:19 +0000 UTC" firstStartedPulling="2026-01-22 11:37:20.411796347 +0000 UTC m=+3633.069832457" lastFinishedPulling="2026-01-22 11:37:26.113046981 +0000 UTC m=+3638.771083131" observedRunningTime="2026-01-22 11:37:27.573653813 +0000 UTC m=+3640.231689973" watchObservedRunningTime="2026-01-22 11:37:27.574802314 +0000 UTC m=+3640.232838434" Jan 22 11:37:29 crc kubenswrapper[4975]: E0122 11:37:29.541451 4975 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.51:43940->38.102.83.51:37165: write tcp 38.102.83.51:43940->38.102.83.51:37165: write: broken pipe Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.059716 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8twjj/crc-debug-fqlrt"] 
Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.061287 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.112429 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b25cf814-6de7-44f4-8e8f-2674e45fae62-host\") pod \"crc-debug-fqlrt\" (UID: \"b25cf814-6de7-44f4-8e8f-2674e45fae62\") " pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.112995 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl2bj\" (UniqueName: \"kubernetes.io/projected/b25cf814-6de7-44f4-8e8f-2674e45fae62-kube-api-access-nl2bj\") pod \"crc-debug-fqlrt\" (UID: \"b25cf814-6de7-44f4-8e8f-2674e45fae62\") " pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.215168 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl2bj\" (UniqueName: \"kubernetes.io/projected/b25cf814-6de7-44f4-8e8f-2674e45fae62-kube-api-access-nl2bj\") pod \"crc-debug-fqlrt\" (UID: \"b25cf814-6de7-44f4-8e8f-2674e45fae62\") " pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.215317 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b25cf814-6de7-44f4-8e8f-2674e45fae62-host\") pod \"crc-debug-fqlrt\" (UID: \"b25cf814-6de7-44f4-8e8f-2674e45fae62\") " pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.215540 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b25cf814-6de7-44f4-8e8f-2674e45fae62-host\") pod \"crc-debug-fqlrt\" (UID: \"b25cf814-6de7-44f4-8e8f-2674e45fae62\") " pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.240258 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl2bj\" (UniqueName: \"kubernetes.io/projected/b25cf814-6de7-44f4-8e8f-2674e45fae62-kube-api-access-nl2bj\") pod \"crc-debug-fqlrt\" (UID: \"b25cf814-6de7-44f4-8e8f-2674e45fae62\") " pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.380793 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:37:30 crc kubenswrapper[4975]: I0122 11:37:30.582844 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/crc-debug-fqlrt" event={"ID":"b25cf814-6de7-44f4-8e8f-2674e45fae62","Type":"ContainerStarted","Data":"5543d88c6bf87e316193423650f57844bdb2668b866f9a3d6b4b3eb64b8b8069"} Jan 22 11:37:42 crc kubenswrapper[4975]: I0122 11:37:42.704838 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/crc-debug-fqlrt" event={"ID":"b25cf814-6de7-44f4-8e8f-2674e45fae62","Type":"ContainerStarted","Data":"092354d44768fb9f82b532849350dd7fd19ca0e36b70f1f33c676941ef6ab983"} Jan 22 11:37:42 crc kubenswrapper[4975]: I0122 11:37:42.728805 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8twjj/crc-debug-fqlrt" podStartSLOduration=1.627516388 podStartE2EDuration="12.728787144s" podCreationTimestamp="2026-01-22 11:37:30 +0000 UTC" firstStartedPulling="2026-01-22 11:37:30.414869987 +0000 UTC m=+3643.072906137" lastFinishedPulling="2026-01-22 11:37:41.516140783 +0000 UTC m=+3654.174176893" observedRunningTime="2026-01-22 11:37:42.718860852 +0000 UTC m=+3655.376897002" watchObservedRunningTime="2026-01-22 11:37:42.728787144 +0000 UTC m=+3655.386823254" Jan 22 11:37:57 crc kubenswrapper[4975]: I0122 11:37:57.437611 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:37:57 crc kubenswrapper[4975]: I0122 11:37:57.438239 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:37:57 crc kubenswrapper[4975]: I0122 11:37:57.438298 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 11:37:57 crc kubenswrapper[4975]: I0122 11:37:57.439184 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:37:57 crc kubenswrapper[4975]: I0122 11:37:57.439253 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" gracePeriod=600 Jan 22 11:37:57 crc kubenswrapper[4975]: E0122 11:37:57.563649 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:37:57 crc kubenswrapper[4975]: I0122 11:37:57.837804 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" exitCode=0 Jan 22 11:37:57 crc kubenswrapper[4975]: I0122 11:37:57.837874 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05"} Jan 22 11:37:57 crc kubenswrapper[4975]: I0122 11:37:57.837920 4975 scope.go:117] "RemoveContainer" containerID="2e348b7bf006c7144d6819d0f3f55761dd7a3e56cad989448ba2e294387bd1c1" Jan 22 11:37:57 crc kubenswrapper[4975]: I0122 11:37:57.838864 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:37:57 crc kubenswrapper[4975]: E0122 11:37:57.839316 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:38:08 crc kubenswrapper[4975]: I0122 11:38:08.755568 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:38:08 crc kubenswrapper[4975]: E0122 11:38:08.756776 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:38:19 crc kubenswrapper[4975]: I0122 11:38:19.038849 4975 generic.go:334] "Generic (PLEG): container finished" podID="b25cf814-6de7-44f4-8e8f-2674e45fae62" containerID="092354d44768fb9f82b532849350dd7fd19ca0e36b70f1f33c676941ef6ab983" exitCode=0 Jan 22 11:38:19 crc kubenswrapper[4975]: I0122 11:38:19.039066 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/crc-debug-fqlrt" event={"ID":"b25cf814-6de7-44f4-8e8f-2674e45fae62","Type":"ContainerDied","Data":"092354d44768fb9f82b532849350dd7fd19ca0e36b70f1f33c676941ef6ab983"} Jan 22 11:38:20 crc kubenswrapper[4975]: I0122 11:38:20.143567 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:38:20 crc kubenswrapper[4975]: I0122 11:38:20.190721 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8twjj/crc-debug-fqlrt"] Jan 22 11:38:20 crc kubenswrapper[4975]: I0122 11:38:20.198744 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8twjj/crc-debug-fqlrt"] Jan 22 11:38:20 crc kubenswrapper[4975]: I0122 11:38:20.304009 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl2bj\" (UniqueName: \"kubernetes.io/projected/b25cf814-6de7-44f4-8e8f-2674e45fae62-kube-api-access-nl2bj\") pod \"b25cf814-6de7-44f4-8e8f-2674e45fae62\" (UID: \"b25cf814-6de7-44f4-8e8f-2674e45fae62\") " Jan 22 11:38:20 crc kubenswrapper[4975]: I0122 11:38:20.304113 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b25cf814-6de7-44f4-8e8f-2674e45fae62-host\") pod \"b25cf814-6de7-44f4-8e8f-2674e45fae62\" (UID: \"b25cf814-6de7-44f4-8e8f-2674e45fae62\") " Jan 22 11:38:20 crc kubenswrapper[4975]: I0122 11:38:20.304310 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25cf814-6de7-44f4-8e8f-2674e45fae62-host" (OuterVolumeSpecName: "host") pod "b25cf814-6de7-44f4-8e8f-2674e45fae62" (UID: "b25cf814-6de7-44f4-8e8f-2674e45fae62"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:38:20 crc kubenswrapper[4975]: I0122 11:38:20.305511 4975 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b25cf814-6de7-44f4-8e8f-2674e45fae62-host\") on node \"crc\" DevicePath \"\"" Jan 22 11:38:20 crc kubenswrapper[4975]: I0122 11:38:20.309099 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25cf814-6de7-44f4-8e8f-2674e45fae62-kube-api-access-nl2bj" (OuterVolumeSpecName: "kube-api-access-nl2bj") pod "b25cf814-6de7-44f4-8e8f-2674e45fae62" (UID: "b25cf814-6de7-44f4-8e8f-2674e45fae62"). InnerVolumeSpecName "kube-api-access-nl2bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:38:20 crc kubenswrapper[4975]: I0122 11:38:20.407499 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl2bj\" (UniqueName: \"kubernetes.io/projected/b25cf814-6de7-44f4-8e8f-2674e45fae62-kube-api-access-nl2bj\") on node \"crc\" DevicePath \"\"" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.056813 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5543d88c6bf87e316193423650f57844bdb2668b866f9a3d6b4b3eb64b8b8069" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.056895 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-fqlrt" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.364979 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8twjj/crc-debug-ccxrw"] Jan 22 11:38:21 crc kubenswrapper[4975]: E0122 11:38:21.365634 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b25cf814-6de7-44f4-8e8f-2674e45fae62" containerName="container-00" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.365648 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="b25cf814-6de7-44f4-8e8f-2674e45fae62" containerName="container-00" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.365876 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="b25cf814-6de7-44f4-8e8f-2674e45fae62" containerName="container-00" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.366476 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.527121 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-host\") pod \"crc-debug-ccxrw\" (UID: \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\") " pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.527554 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfzp\" (UniqueName: \"kubernetes.io/projected/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-kube-api-access-2rfzp\") pod \"crc-debug-ccxrw\" (UID: \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\") " pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.629288 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-host\") pod \"crc-debug-ccxrw\" (UID: \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\") " pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.629390 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-host\") pod \"crc-debug-ccxrw\" (UID: \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\") " pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.629466 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfzp\" (UniqueName: \"kubernetes.io/projected/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-kube-api-access-2rfzp\") pod \"crc-debug-ccxrw\" (UID: \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\") " pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.655560 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfzp\" (UniqueName: \"kubernetes.io/projected/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-kube-api-access-2rfzp\") pod \"crc-debug-ccxrw\" (UID: \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\") " pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.682852 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:21 crc kubenswrapper[4975]: I0122 11:38:21.765496 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b25cf814-6de7-44f4-8e8f-2674e45fae62" path="/var/lib/kubelet/pods/b25cf814-6de7-44f4-8e8f-2674e45fae62/volumes" Jan 22 11:38:22 crc kubenswrapper[4975]: I0122 11:38:22.067818 4975 generic.go:334] "Generic (PLEG): container finished" podID="20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9" containerID="4c7911fafdf88449cbe35bc75d5e83d0ffafe49892692640756b59ac994a1af4" exitCode=0 Jan 22 11:38:22 crc kubenswrapper[4975]: I0122 11:38:22.067865 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/crc-debug-ccxrw" event={"ID":"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9","Type":"ContainerDied","Data":"4c7911fafdf88449cbe35bc75d5e83d0ffafe49892692640756b59ac994a1af4"} Jan 22 11:38:22 crc kubenswrapper[4975]: I0122 11:38:22.068160 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/crc-debug-ccxrw" event={"ID":"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9","Type":"ContainerStarted","Data":"e8a321ce5468963d5077c805e48cf6da2d7ba50cf9e5b104098947289d64ecaf"} Jan 22 11:38:22 crc kubenswrapper[4975]: I0122 11:38:22.544998 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8twjj/crc-debug-ccxrw"] Jan 22 11:38:22 crc kubenswrapper[4975]: I0122 11:38:22.556155 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8twjj/crc-debug-ccxrw"] Jan 22 11:38:22 crc kubenswrapper[4975]: I0122 11:38:22.756436 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:38:22 crc kubenswrapper[4975]: E0122 11:38:22.757227 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.189123 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.356794 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rfzp\" (UniqueName: \"kubernetes.io/projected/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-kube-api-access-2rfzp\") pod \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\" (UID: \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\") " Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.356934 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-host\") pod \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\" (UID: \"20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9\") " Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.357458 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-host" (OuterVolumeSpecName: "host") pod "20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9" (UID: "20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.357922 4975 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-host\") on node \"crc\" DevicePath \"\"" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.363727 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-kube-api-access-2rfzp" (OuterVolumeSpecName: "kube-api-access-2rfzp") pod "20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9" (UID: "20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9"). InnerVolumeSpecName "kube-api-access-2rfzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.460142 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rfzp\" (UniqueName: \"kubernetes.io/projected/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9-kube-api-access-2rfzp\") on node \"crc\" DevicePath \"\"" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.710778 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8twjj/crc-debug-qppcj"] Jan 22 11:38:23 crc kubenswrapper[4975]: E0122 11:38:23.712409 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9" containerName="container-00" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.712494 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9" containerName="container-00" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.712749 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9" containerName="container-00" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.713436 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.766269 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9" path="/var/lib/kubelet/pods/20d7ff94-a2d0-4b38-93aa-a0b6e6218ff9/volumes" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.866966 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvw9n\" (UniqueName: \"kubernetes.io/projected/f8e564d7-61b3-44e0-8e7b-05d192f30067-kube-api-access-nvw9n\") pod \"crc-debug-qppcj\" (UID: \"f8e564d7-61b3-44e0-8e7b-05d192f30067\") " pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.867744 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8e564d7-61b3-44e0-8e7b-05d192f30067-host\") pod \"crc-debug-qppcj\" (UID: \"f8e564d7-61b3-44e0-8e7b-05d192f30067\") " pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.969732 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8e564d7-61b3-44e0-8e7b-05d192f30067-host\") pod \"crc-debug-qppcj\" (UID: \"f8e564d7-61b3-44e0-8e7b-05d192f30067\") " pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.969836 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvw9n\" (UniqueName: \"kubernetes.io/projected/f8e564d7-61b3-44e0-8e7b-05d192f30067-kube-api-access-nvw9n\") pod \"crc-debug-qppcj\" (UID: \"f8e564d7-61b3-44e0-8e7b-05d192f30067\") " pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.970214 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8e564d7-61b3-44e0-8e7b-05d192f30067-host\") pod \"crc-debug-qppcj\" (UID: \"f8e564d7-61b3-44e0-8e7b-05d192f30067\") " pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:23 crc kubenswrapper[4975]: I0122 11:38:23.996044 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvw9n\" (UniqueName: \"kubernetes.io/projected/f8e564d7-61b3-44e0-8e7b-05d192f30067-kube-api-access-nvw9n\") pod \"crc-debug-qppcj\" (UID: \"f8e564d7-61b3-44e0-8e7b-05d192f30067\") " pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:24 crc kubenswrapper[4975]: I0122 11:38:24.032244 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:24 crc kubenswrapper[4975]: W0122 11:38:24.075103 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8e564d7_61b3_44e0_8e7b_05d192f30067.slice/crio-f342920749f9b8b74effcc02d7e54c486a88af5ca16fb23e280597d390ef1730 WatchSource:0}: Error finding container f342920749f9b8b74effcc02d7e54c486a88af5ca16fb23e280597d390ef1730: Status 404 returned error can't find the container with id f342920749f9b8b74effcc02d7e54c486a88af5ca16fb23e280597d390ef1730 Jan 22 11:38:24 crc kubenswrapper[4975]: I0122 11:38:24.088997 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/crc-debug-qppcj" event={"ID":"f8e564d7-61b3-44e0-8e7b-05d192f30067","Type":"ContainerStarted","Data":"f342920749f9b8b74effcc02d7e54c486a88af5ca16fb23e280597d390ef1730"} Jan 22 11:38:24 crc kubenswrapper[4975]: I0122 11:38:24.090801 4975 scope.go:117] "RemoveContainer" containerID="4c7911fafdf88449cbe35bc75d5e83d0ffafe49892692640756b59ac994a1af4" Jan 22 11:38:24 crc kubenswrapper[4975]: I0122 11:38:24.090830 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-ccxrw" Jan 22 11:38:25 crc kubenswrapper[4975]: I0122 11:38:25.098931 4975 generic.go:334] "Generic (PLEG): container finished" podID="f8e564d7-61b3-44e0-8e7b-05d192f30067" containerID="638d3b726bb1fdd81a64748dc35f8b8da1c404ef3269c23d0fd1bce6d532f03a" exitCode=0 Jan 22 11:38:25 crc kubenswrapper[4975]: I0122 11:38:25.099233 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/crc-debug-qppcj" event={"ID":"f8e564d7-61b3-44e0-8e7b-05d192f30067","Type":"ContainerDied","Data":"638d3b726bb1fdd81a64748dc35f8b8da1c404ef3269c23d0fd1bce6d532f03a"} Jan 22 11:38:25 crc kubenswrapper[4975]: I0122 11:38:25.139480 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8twjj/crc-debug-qppcj"] Jan 22 11:38:25 crc kubenswrapper[4975]: I0122 11:38:25.149407 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8twjj/crc-debug-qppcj"] Jan 22 11:38:26 crc kubenswrapper[4975]: I0122 11:38:26.213903 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:26 crc kubenswrapper[4975]: I0122 11:38:26.344221 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvw9n\" (UniqueName: \"kubernetes.io/projected/f8e564d7-61b3-44e0-8e7b-05d192f30067-kube-api-access-nvw9n\") pod \"f8e564d7-61b3-44e0-8e7b-05d192f30067\" (UID: \"f8e564d7-61b3-44e0-8e7b-05d192f30067\") " Jan 22 11:38:26 crc kubenswrapper[4975]: I0122 11:38:26.344318 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8e564d7-61b3-44e0-8e7b-05d192f30067-host\") pod \"f8e564d7-61b3-44e0-8e7b-05d192f30067\" (UID: \"f8e564d7-61b3-44e0-8e7b-05d192f30067\") " Jan 22 11:38:26 crc kubenswrapper[4975]: I0122 11:38:26.344455 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8e564d7-61b3-44e0-8e7b-05d192f30067-host" (OuterVolumeSpecName: "host") pod "f8e564d7-61b3-44e0-8e7b-05d192f30067" (UID: "f8e564d7-61b3-44e0-8e7b-05d192f30067"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:38:26 crc kubenswrapper[4975]: I0122 11:38:26.344858 4975 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8e564d7-61b3-44e0-8e7b-05d192f30067-host\") on node \"crc\" DevicePath \"\"" Jan 22 11:38:26 crc kubenswrapper[4975]: I0122 11:38:26.351570 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e564d7-61b3-44e0-8e7b-05d192f30067-kube-api-access-nvw9n" (OuterVolumeSpecName: "kube-api-access-nvw9n") pod "f8e564d7-61b3-44e0-8e7b-05d192f30067" (UID: "f8e564d7-61b3-44e0-8e7b-05d192f30067"). InnerVolumeSpecName "kube-api-access-nvw9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:38:26 crc kubenswrapper[4975]: I0122 11:38:26.447567 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvw9n\" (UniqueName: \"kubernetes.io/projected/f8e564d7-61b3-44e0-8e7b-05d192f30067-kube-api-access-nvw9n\") on node \"crc\" DevicePath \"\"" Jan 22 11:38:27 crc kubenswrapper[4975]: I0122 11:38:27.119144 4975 scope.go:117] "RemoveContainer" containerID="638d3b726bb1fdd81a64748dc35f8b8da1c404ef3269c23d0fd1bce6d532f03a" Jan 22 11:38:27 crc kubenswrapper[4975]: I0122 11:38:27.119582 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8twjj/crc-debug-qppcj" Jan 22 11:38:27 crc kubenswrapper[4975]: I0122 11:38:27.786989 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e564d7-61b3-44e0-8e7b-05d192f30067" path="/var/lib/kubelet/pods/f8e564d7-61b3-44e0-8e7b-05d192f30067/volumes" Jan 22 11:38:35 crc kubenswrapper[4975]: I0122 11:38:35.755370 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:38:35 crc kubenswrapper[4975]: E0122 11:38:35.756189 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:38:40 crc kubenswrapper[4975]: I0122 11:38:40.243090 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d584b4fcd-5llj7_fa760241-75cd-4e21-8b47-5e367c1cff52/barbican-api/0.log" Jan 22 11:38:40 crc kubenswrapper[4975]: I0122 11:38:40.418938 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d584b4fcd-5llj7_fa760241-75cd-4e21-8b47-5e367c1cff52/barbican-api-log/0.log" Jan 22 11:38:40 crc kubenswrapper[4975]: I0122 11:38:40.466484 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-749cddb87d-66769_d7922a3a-ab4e-4400-9da2-b01efcf11ca7/barbican-keystone-listener/0.log" Jan 22 11:38:40 crc kubenswrapper[4975]: I0122 11:38:40.544293 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-749cddb87d-66769_d7922a3a-ab4e-4400-9da2-b01efcf11ca7/barbican-keystone-listener-log/0.log" Jan 22 11:38:40 crc kubenswrapper[4975]: I0122 11:38:40.660699 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f5cf4887-x2cb8_4c2be393-e3cf-4356-9f2e-6334f7ee99cc/barbican-worker/0.log" Jan 22 11:38:40 crc 
kubenswrapper[4975]: I0122 11:38:40.681444 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f5cf4887-x2cb8_4c2be393-e3cf-4356-9f2e-6334f7ee99cc/barbican-worker-log/0.log" Jan 22 11:38:40 crc kubenswrapper[4975]: I0122 11:38:40.866320 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e8833648-ddd8-4847-b2b0-25d3f8ac08c3/ceilometer-central-agent/0.log" Jan 22 11:38:40 crc kubenswrapper[4975]: I0122 11:38:40.911127 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c_3e928e99-1e6c-4d3e-a7f3-bd725e29c415/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:40 crc kubenswrapper[4975]: I0122 11:38:40.993782 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e8833648-ddd8-4847-b2b0-25d3f8ac08c3/ceilometer-notification-agent/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.052038 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e8833648-ddd8-4847-b2b0-25d3f8ac08c3/proxy-httpd/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.069101 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e8833648-ddd8-4847-b2b0-25d3f8ac08c3/sg-core/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.240074 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b0e51991-42f2-4b42-a988-73e2dd891b4d/cinder-api/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.288479 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b0e51991-42f2-4b42-a988-73e2dd891b4d/cinder-api-log/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.389216 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0be551d1-c8bf-440d-b0af-1b40cb1d7e8a/cinder-scheduler/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.443970 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0be551d1-c8bf-440d-b0af-1b40cb1d7e8a/probe/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.528074 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-d5spx_a3494e98-461d-491b-96c0-05c360412e74/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.696651 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-m82g6_fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.812442 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-j5kkw_2266f8c0-fa93-40b2-994b-90c948a2a2f8/init/0.log" Jan 22 11:38:41 crc kubenswrapper[4975]: I0122 11:38:41.923222 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-j5kkw_2266f8c0-fa93-40b2-994b-90c948a2a2f8/init/0.log" Jan 22 11:38:42 crc kubenswrapper[4975]: I0122 11:38:42.017377 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-j5kkw_2266f8c0-fa93-40b2-994b-90c948a2a2f8/dnsmasq-dns/0.log" Jan 22 11:38:42 crc kubenswrapper[4975]: I0122 11:38:42.048237 4975 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-l7t47_a2579a01-95bb-4c7c-8bf5-c0c81586e31d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:42 crc kubenswrapper[4975]: I0122 11:38:42.214145 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e68830c0-793a-4984-9a65-427420074a97/glance-log/0.log" Jan 22 11:38:42 crc kubenswrapper[4975]: I0122 11:38:42.254736 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e68830c0-793a-4984-9a65-427420074a97/glance-httpd/0.log" Jan 22 11:38:42 crc kubenswrapper[4975]: I0122 11:38:42.342474 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_347d672f-7247-4f3d-89dd-f57fa3e0c628/glance-httpd/0.log" Jan 22 11:38:42 crc kubenswrapper[4975]: I0122 11:38:42.397775 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_347d672f-7247-4f3d-89dd-f57fa3e0c628/glance-log/0.log" Jan 22 11:38:42 crc kubenswrapper[4975]: I0122 11:38:42.656042 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cb95f4cc6-jdv4m_d2027dc6-aa67-467c-ad8d-e31ccb6d66ee/horizon/0.log" Jan 22 11:38:42 crc kubenswrapper[4975]: I0122 11:38:42.800299 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v_2235b954-42a4-4665-a60a-e9f4e49b3e45/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:42 crc kubenswrapper[4975]: I0122 11:38:42.921770 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cb95f4cc6-jdv4m_d2027dc6-aa67-467c-ad8d-e31ccb6d66ee/horizon-log/0.log" Jan 22 11:38:43 crc kubenswrapper[4975]: I0122 11:38:43.013555 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-tzwgz_3b6f2238-afde-4cfc-83aa-c5d56a2afeb4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:43 crc kubenswrapper[4975]: I0122 11:38:43.225646 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29484661-rhjvs_e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f/keystone-cron/0.log" Jan 22 11:38:43 crc kubenswrapper[4975]: I0122 11:38:43.329285 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79786fd675-77kk9_a3a8b4a6-2a15-4752-918f-066167d8699d/keystone-api/0.log" Jan 22 11:38:43 crc kubenswrapper[4975]: I0122 11:38:43.427169 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9f501c78-c779-4060-9262-dbbe23739de6/kube-state-metrics/0.log" Jan 22 11:38:43 crc kubenswrapper[4975]: I0122 11:38:43.499000 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-w6n88_f26c2f80-766b-4f5c-9c50-13fba04daff7/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:43 crc kubenswrapper[4975]: I0122 11:38:43.846912 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54575f849-qtpk8_17be82a9-4845-4b3d-85ca-82489d4687b2/neutron-httpd/0.log" Jan 22 11:38:43 crc kubenswrapper[4975]: I0122 11:38:43.903672 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54575f849-qtpk8_17be82a9-4845-4b3d-85ca-82489d4687b2/neutron-api/0.log" Jan 22 11:38:44 crc kubenswrapper[4975]: I0122 11:38:44.024948 4975 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2_ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:44 crc kubenswrapper[4975]: I0122 11:38:44.558866 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9e829dfc-69fb-4951-b203-53af6945a1e1/nova-cell0-conductor-conductor/0.log" Jan 22 11:38:44 crc kubenswrapper[4975]: I0122 11:38:44.584831 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ff6252b0-1c69-4710-9fd2-ab50960f3495/nova-api-log/0.log" Jan 22 11:38:44 crc kubenswrapper[4975]: I0122 11:38:44.990713 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ff6252b0-1c69-4710-9fd2-ab50960f3495/nova-api-api/0.log" Jan 22 11:38:45 crc kubenswrapper[4975]: I0122 11:38:45.003467 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8c07c882-c139-44d5-be3c-79a3a02cab8a/nova-cell1-conductor-conductor/0.log" Jan 22 11:38:45 crc kubenswrapper[4975]: I0122 11:38:45.059461 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 11:38:45 crc kubenswrapper[4975]: I0122 11:38:45.293205 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-c25vl_f8154d48-3ee4-45f3-bf77-45befe6cdc3a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:45 crc kubenswrapper[4975]: I0122 11:38:45.481182 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_983309a9-5940-4b5f-b84d-a219bbe96d93/nova-metadata-log/0.log" Jan 22 11:38:45 crc kubenswrapper[4975]: I0122 11:38:45.733796 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_86380f07-beeb-467d-aabd-0927af769a35/nova-scheduler-scheduler/0.log" Jan 22 11:38:45 crc kubenswrapper[4975]: I0122 11:38:45.788676 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_621048f0-ccdc-4953-97d4-ba5286ff3345/mysql-bootstrap/0.log" Jan 22 11:38:46 crc kubenswrapper[4975]: I0122 11:38:46.195462 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_621048f0-ccdc-4953-97d4-ba5286ff3345/mysql-bootstrap/0.log" Jan 22 11:38:46 crc kubenswrapper[4975]: I0122 11:38:46.205525 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_621048f0-ccdc-4953-97d4-ba5286ff3345/galera/0.log" Jan 22 11:38:46 crc kubenswrapper[4975]: I0122 11:38:46.373681 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f6d809d7-faf1-4033-a986-f49f903d36f1/mysql-bootstrap/0.log" Jan 22 11:38:46 crc kubenswrapper[4975]: I0122 11:38:46.654608 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f6d809d7-faf1-4033-a986-f49f903d36f1/mysql-bootstrap/0.log" Jan 22 11:38:46 crc kubenswrapper[4975]: I0122 11:38:46.683798 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f6d809d7-faf1-4033-a986-f49f903d36f1/galera/0.log" Jan 22 11:38:46 crc kubenswrapper[4975]: I0122 11:38:46.790033 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_983309a9-5940-4b5f-b84d-a219bbe96d93/nova-metadata-metadata/0.log" Jan 22 11:38:46 crc 
kubenswrapper[4975]: I0122 11:38:46.869466 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1217b84a-f2e0-49f9-a237-30f9493f1ca4/openstackclient/0.log" Jan 22 11:38:46 crc kubenswrapper[4975]: I0122 11:38:46.997630 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kkcjz_b366080b-ad75-4205-9257-9d8a8ac5aed8/openstack-network-exporter/0.log" Jan 22 11:38:47 crc kubenswrapper[4975]: I0122 11:38:47.113044 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mj5f2_47de946e-512c-49a6-b633-c3c1640fb4bb/ovn-controller/0.log" Jan 22 11:38:47 crc kubenswrapper[4975]: I0122 11:38:47.289057 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jqvh9_90f8cfc5-4b43-4d89-8c42-a39306a0f1df/ovsdb-server-init/0.log" Jan 22 11:38:47 crc kubenswrapper[4975]: I0122 11:38:47.511594 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jqvh9_90f8cfc5-4b43-4d89-8c42-a39306a0f1df/ovsdb-server/0.log" Jan 22 11:38:47 crc kubenswrapper[4975]: I0122 11:38:47.512187 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jqvh9_90f8cfc5-4b43-4d89-8c42-a39306a0f1df/ovsdb-server-init/0.log" Jan 22 11:38:47 crc kubenswrapper[4975]: I0122 11:38:47.529535 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jqvh9_90f8cfc5-4b43-4d89-8c42-a39306a0f1df/ovs-vswitchd/0.log" Jan 22 11:38:47 crc kubenswrapper[4975]: I0122 11:38:47.757313 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dg2vt_0b6be64b-5e04-4797-8617-681e72ebb9d7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:47 crc kubenswrapper[4975]: I0122 11:38:47.793885 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_13d61e05-db24-4d01-bfc6-5ff85c254d7c/openstack-network-exporter/0.log" Jan 22 11:38:47 crc kubenswrapper[4975]: I0122 11:38:47.840816 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_13d61e05-db24-4d01-bfc6-5ff85c254d7c/ovn-northd/0.log" Jan 22 11:38:47 crc kubenswrapper[4975]: I0122 11:38:47.956639 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3ce5bdc2-44a6-4071-8df7-736a1992fee0/openstack-network-exporter/0.log" Jan 22 11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.050688 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3ce5bdc2-44a6-4071-8df7-736a1992fee0/ovsdbserver-nb/0.log" Jan 22 11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.188813 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a50b6032-49e5-467e-9728-b4c43293dbcf/openstack-network-exporter/0.log" Jan 22 11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.355185 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a50b6032-49e5-467e-9728-b4c43293dbcf/ovsdbserver-sb/0.log" Jan 22 11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.441003 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85bb784b78-f8g8c_3dbff787-f548-4837-acb9-e1538c513330/placement-api/0.log" Jan 22 11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.518009 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85bb784b78-f8g8c_3dbff787-f548-4837-acb9-e1538c513330/placement-log/0.log" Jan 22 
11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.598855 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6110323c-c3aa-4d7b-a201-20d85afa29d3/setup-container/0.log" Jan 22 11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.756815 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:38:48 crc kubenswrapper[4975]: E0122 11:38:48.757120 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.817873 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9f2d061f-103c-4b8a-aa45-3da02ffebff5/setup-container/0.log" Jan 22 11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.862090 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6110323c-c3aa-4d7b-a201-20d85afa29d3/setup-container/0.log" Jan 22 11:38:48 crc kubenswrapper[4975]: I0122 11:38:48.862951 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6110323c-c3aa-4d7b-a201-20d85afa29d3/rabbitmq/0.log" Jan 22 11:38:49 crc kubenswrapper[4975]: I0122 11:38:49.248061 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9f2d061f-103c-4b8a-aa45-3da02ffebff5/setup-container/0.log" Jan 22 11:38:49 crc kubenswrapper[4975]: I0122 11:38:49.303151 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9f2d061f-103c-4b8a-aa45-3da02ffebff5/rabbitmq/0.log" Jan 22 11:38:49 crc kubenswrapper[4975]: I0122 11:38:49.308297 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm_9f5109b8-3671-4f96-8d9f-625b4c02b1b6/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:49 crc kubenswrapper[4975]: I0122 11:38:49.479959 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8v6sw_d6c907cc-684d-40a8-905a-977845138321/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:49 crc kubenswrapper[4975]: I0122 11:38:49.519452 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p_f15014da-c77d-4583-9547-dceec5d6628a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:49 crc kubenswrapper[4975]: I0122 11:38:49.721461 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-flsmq_36d854d2-b0f2-4864-b614-715d94ba5631/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:49 crc kubenswrapper[4975]: I0122 11:38:49.829870 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fs85s_f3c65871-e82d-4687-99c7-4af5194d7c98/ssh-known-hosts-edpm-deployment/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.047718 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78ffc5fbc9-46msq_91f6b2db-2af6-4817-a5cc-9c04fe10070f/proxy-server/0.log" Jan 22 11:38:50 
crc kubenswrapper[4975]: I0122 11:38:50.049348 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78ffc5fbc9-46msq_91f6b2db-2af6-4817-a5cc-9c04fe10070f/proxy-httpd/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.218365 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7kqbs_ee721893-cf9b-4712-b421-19d6a59d7cf7/swift-ring-rebalance/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.289358 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/account-auditor/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.380076 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/account-reaper/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.440990 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/account-replicator/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.506454 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/account-server/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.580435 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/container-auditor/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.618442 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/container-replicator/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.630956 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/container-server/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.732835 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/container-updater/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.788282 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-auditor/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.808745 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-expirer/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.861064 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-replicator/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.944591 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-server/0.log" Jan 22 11:38:50 crc kubenswrapper[4975]: I0122 11:38:50.972928 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-updater/0.log" Jan 22 11:38:51 crc kubenswrapper[4975]: I0122 11:38:51.022715 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/rsync/0.log" Jan 22 11:38:51 crc kubenswrapper[4975]: I0122 11:38:51.101072 4975 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/swift-recon-cron/0.log" Jan 22 11:38:51 crc kubenswrapper[4975]: I0122 11:38:51.335439 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j5frs_89ca7561-30de-4093-8f2f-09455a740f94/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:51 crc kubenswrapper[4975]: I0122 11:38:51.412569 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2401e7da-ce1d-45c2-b62e-bd617338ac17/tempest-tests-tempest-tests-runner/0.log" Jan 22 11:38:51 crc kubenswrapper[4975]: I0122 11:38:51.592975 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3a80b919-f543-4c35-8432-79eb4b8c9c2a/test-operator-logs-container/0.log" Jan 22 11:38:51 crc kubenswrapper[4975]: I0122 11:38:51.635879 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bk27r_1f779aa8-c70d-46ba-a278-b4062a5ddd01/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:38:59 crc kubenswrapper[4975]: I0122 11:38:59.807134 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5525f79d-14ec-480a-b69f-7c003fdc6dba/memcached/0.log" Jan 22 11:39:03 crc kubenswrapper[4975]: I0122 11:39:03.754262 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:39:03 crc kubenswrapper[4975]: E0122 11:39:03.755118 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:39:14 crc kubenswrapper[4975]: I0122 11:39:14.755771 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:39:14 crc kubenswrapper[4975]: E0122 11:39:14.756684 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 11:39:17.079915 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/util/0.log" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 11:39:17.257610 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/pull/0.log" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 11:39:17.263997 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/util/0.log" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 
11:39:17.337188 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/pull/0.log" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 11:39:17.467860 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/pull/0.log" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 11:39:17.470268 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/util/0.log" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 11:39:17.517815 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/extract/0.log" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 11:39:17.722760 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-wsw9j_1cb3165d-a089-4c3d-9d70-73145248cd5c/manager/0.log" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 11:39:17.847020 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-5w2lh_f1fa565f-67df-4a8f-9f10-33397d530bbb/manager/0.log" Jan 22 11:39:17 crc kubenswrapper[4975]: I0122 11:39:17.929722 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-6dln8_17061bf9-682c-4294-950a-18a5c5a00577/manager/0.log" Jan 22 11:39:18 crc kubenswrapper[4975]: I0122 11:39:18.100083 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-hgw2h_9b23e2d7-2430-4770-bb64-b2d8a14c2f7c/manager/0.log" Jan 22 11:39:18 crc kubenswrapper[4975]: I0122 11:39:18.137196 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-kfvm6_e8365e14-67ba-495f-a996-a2b63c7a391e/manager/0.log" Jan 22 11:39:18 crc kubenswrapper[4975]: I0122 11:39:18.279146 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-g49ht_96fb0fc0-a7b6-4607-b879-9bfb9f318742/manager/0.log" Jan 22 11:39:18 crc kubenswrapper[4975]: I0122 11:39:18.492834 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-dm74c_962350e2-bd43-433c-af57-8094db573873/manager/0.log" Jan 22 11:39:18 crc kubenswrapper[4975]: I0122 11:39:18.557070 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-s5qw6_fe34d1a8-dae0-4b05-b51f-9f577a2ff8de/manager/0.log" Jan 22 11:39:18 crc kubenswrapper[4975]: I0122 11:39:18.735244 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-jk546_f1fc9c22-ce13-4bde-8901-08225613757a/manager/0.log" Jan 22 11:39:18 crc kubenswrapper[4975]: I0122 11:39:18.759237 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-48bd6_ff361e81-48c6-4926-84d9-fcca6fba71f6/manager/0.log" Jan 22 11:39:18 crc kubenswrapper[4975]: 
I0122 11:39:18.957135 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-qcx4k_a600298c-821b-47b5-9018-8d7ef15267c8/manager/0.log" Jan 22 11:39:19 crc kubenswrapper[4975]: I0122 11:39:19.023180 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-27zdj_aa6e0f83-d561-4537-934c-54373200aa1f/manager/0.log" Jan 22 11:39:19 crc kubenswrapper[4975]: I0122 11:39:19.209455 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-s64bc_9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62/manager/0.log" Jan 22 11:39:19 crc kubenswrapper[4975]: I0122 11:39:19.245413 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-4jt9j_3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb/manager/0.log" Jan 22 11:39:19 crc kubenswrapper[4975]: I0122 11:39:19.462818 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7_dc8e73d1-2925-43ef-a854-eb93ead085d5/manager/0.log" Jan 22 11:39:19 crc kubenswrapper[4975]: I0122 11:39:19.727035 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-9b49577dc-tm6xf_0c16cec1-1906-45c5-83be-5b14f7e44321/operator/0.log" Jan 22 11:39:20 crc kubenswrapper[4975]: I0122 11:39:20.019159 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rvw6h_8f4b0804-c42b-4aad-bb37-d55746bcffc6/registry-server/0.log" Jan 22 11:39:20 crc kubenswrapper[4975]: I0122 11:39:20.098311 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-wmzml_834b49cf-0565-4e20-b598-7d01ea144148/manager/0.log" Jan 22 11:39:20 crc kubenswrapper[4975]: I0122 11:39:20.269151 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-pspwb_177b52ae-55bc-40d1-badb-253927400560/manager/0.log" Jan 22 11:39:20 crc kubenswrapper[4975]: I0122 11:39:20.560672 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kqh4f_d7834e51-fb9e-4f97-a385-dd98869ece3b/operator/0.log" Jan 22 11:39:20 crc kubenswrapper[4975]: I0122 11:39:20.684036 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-x2kk6_ea739801-0b35-4cd3-a198-a58e93059f88/manager/0.log" Jan 22 11:39:20 crc kubenswrapper[4975]: I0122 11:39:20.836895 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-fcdd7b6c4-lq88q_2203164e-ced5-46bd-85c1-92eb94c7fb9f/manager/0.log" Jan 22 11:39:21 crc kubenswrapper[4975]: I0122 11:39:21.199936 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-l8cz4_c6673acf-6a08-449e-b67a-e1245dbe2206/manager/0.log" Jan 22 11:39:21 crc kubenswrapper[4975]: I0122 11:39:21.244121 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-fpxsm_17acd27c-28fa-4f87-a3e3-c0c63adeeb9a/manager/0.log" Jan 22 11:39:21 crc kubenswrapper[4975]: I0122 11:39:21.432353 4975 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-ddrzg_88fd0de4-4508-450b-97b4-3206c50fb0a2/manager/0.log" Jan 22 11:39:28 crc kubenswrapper[4975]: I0122 11:39:28.755725 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:39:28 crc kubenswrapper[4975]: E0122 11:39:28.756576 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:39:40 crc kubenswrapper[4975]: I0122 11:39:40.521631 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d6l9z_5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f/control-plane-machine-set-operator/0.log" Jan 22 11:39:40 crc kubenswrapper[4975]: I0122 11:39:40.719079 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-prtg7_af2b3377-4d92-4b44-bee1-338c9156c095/kube-rbac-proxy/0.log" Jan 22 11:39:40 crc kubenswrapper[4975]: I0122 11:39:40.773212 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-prtg7_af2b3377-4d92-4b44-bee1-338c9156c095/machine-api-operator/0.log" Jan 22 11:39:41 crc kubenswrapper[4975]: I0122 11:39:41.758816 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:39:41 crc kubenswrapper[4975]: E0122 11:39:41.759429 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:39:52 crc kubenswrapper[4975]: I0122 11:39:52.755670 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:39:52 crc kubenswrapper[4975]: E0122 11:39:52.756312 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:39:54 crc kubenswrapper[4975]: I0122 11:39:54.241656 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-l5qtb_961f5491-5692-4052-a7de-b05dcf68274b/cert-manager-controller/0.log" Jan 22 11:39:54 crc kubenswrapper[4975]: I0122 11:39:54.422147 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-4hxcx_dd1688d7-4085-4be3-9d16-3d212a30c5cb/cert-manager-cainjector/0.log" Jan 22 11:39:54 crc kubenswrapper[4975]: I0122 11:39:54.491079 4975 log.go:25] "Finished parsing log 
file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qlcb7_1c1a204e-dbad-4479-854d-98812e592f54/cert-manager-webhook/0.log" Jan 22 11:40:03 crc kubenswrapper[4975]: I0122 11:40:03.754646 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:40:03 crc kubenswrapper[4975]: E0122 11:40:03.755323 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:40:08 crc kubenswrapper[4975]: I0122 11:40:08.003283 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7z2df_8f35ea3f-ed83-4201-a249-fc379ee0951c/nmstate-console-plugin/0.log" Jan 22 11:40:08 crc kubenswrapper[4975]: I0122 11:40:08.166197 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mjdwp_0f5b8079-630d-42c9-be23-abb29c55fcc8/nmstate-handler/0.log" Jan 22 11:40:08 crc kubenswrapper[4975]: I0122 11:40:08.226894 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hqp5c_7bcd90cd-87a3-4def-bff8-b31659a68fd2/kube-rbac-proxy/0.log" Jan 22 11:40:08 crc kubenswrapper[4975]: I0122 11:40:08.288501 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hqp5c_7bcd90cd-87a3-4def-bff8-b31659a68fd2/nmstate-metrics/0.log" Jan 22 11:40:08 crc kubenswrapper[4975]: I0122 11:40:08.410318 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-d8nq2_1d79f2dc-129c-4d57-94d5-3e2b73dad858/nmstate-operator/0.log" Jan 22 11:40:08 crc kubenswrapper[4975]: I0122 11:40:08.474949 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-r9mqr_d10d736d-0f74-4f33-bcbc-31d9a55f3370/nmstate-webhook/0.log" Jan 22 11:40:14 crc kubenswrapper[4975]: I0122 11:40:14.755500 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:40:14 crc kubenswrapper[4975]: E0122 11:40:14.756698 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:40:26 crc kubenswrapper[4975]: I0122 11:40:26.756023 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:40:26 crc kubenswrapper[4975]: E0122 11:40:26.756886 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" 
podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:40:37 crc kubenswrapper[4975]: I0122 11:40:37.692518 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-d2mbh_88d55612-c72e-4893-8ef2-a524fa613e7a/kube-rbac-proxy/0.log" Jan 22 11:40:37 crc kubenswrapper[4975]: I0122 11:40:37.770052 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-d2mbh_88d55612-c72e-4893-8ef2-a524fa613e7a/controller/0.log" Jan 22 11:40:37 crc kubenswrapper[4975]: I0122 11:40:37.903802 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-frr-files/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.093319 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-frr-files/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.132247 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-reloader/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.137263 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-metrics/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.215029 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-reloader/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.342226 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-frr-files/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.378067 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-reloader/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.382440 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-metrics/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.398950 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-metrics/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.568777 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-metrics/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.571939 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-frr-files/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.576301 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-reloader/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.591121 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/controller/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.783046 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/kube-rbac-proxy/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.809419 4975 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/frr-metrics/0.log" Jan 22 11:40:38 crc kubenswrapper[4975]: I0122 11:40:38.868817 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/kube-rbac-proxy-frr/0.log" Jan 22 11:40:39 crc kubenswrapper[4975]: I0122 11:40:39.015269 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/reloader/0.log" Jan 22 11:40:39 crc kubenswrapper[4975]: I0122 11:40:39.070959 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6tqxg_b0d54a39-8b95-459f-a047-244ed0e3a156/frr-k8s-webhook-server/0.log" Jan 22 11:40:39 crc kubenswrapper[4975]: I0122 11:40:39.323054 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b68b5dc66-l54nz_3c3466c9-e4db-41b6-90c7-132cf6c0dae7/manager/0.log" Jan 22 11:40:39 crc kubenswrapper[4975]: I0122 11:40:39.726620 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b6ddfffdc-46wrh_c6b7738a-4a9e-4f09-a3a0-5a413eed76b9/webhook-server/0.log" Jan 22 11:40:39 crc kubenswrapper[4975]: I0122 11:40:39.800450 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/frr/0.log" Jan 22 11:40:39 crc kubenswrapper[4975]: I0122 11:40:39.847814 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2knnf_1b770e2d-a830-4d27-8567-52246e202aa3/kube-rbac-proxy/0.log" Jan 22 11:40:40 crc kubenswrapper[4975]: I0122 11:40:40.188353 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2knnf_1b770e2d-a830-4d27-8567-52246e202aa3/speaker/0.log" Jan 22 11:40:41 crc kubenswrapper[4975]: I0122 11:40:41.754574 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:40:41 crc kubenswrapper[4975]: E0122 11:40:41.755020 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:40:52 crc kubenswrapper[4975]: I0122 11:40:52.755977 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:40:52 crc kubenswrapper[4975]: E0122 11:40:52.756825 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:40:54 crc kubenswrapper[4975]: I0122 11:40:54.360624 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/util/0.log" Jan 22 
Jan 22 11:40:54 crc kubenswrapper[4975]: I0122 11:40:54.645548 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/util/0.log"
Jan 22 11:40:54 crc kubenswrapper[4975]: I0122 11:40:54.646130 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/pull/0.log"
Jan 22 11:40:54 crc kubenswrapper[4975]: I0122 11:40:54.894411 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/extract/0.log"
Jan 22 11:40:54 crc kubenswrapper[4975]: I0122 11:40:54.899364 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/pull/0.log"
Jan 22 11:40:54 crc kubenswrapper[4975]: I0122 11:40:54.918357 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/util/0.log"
Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.068448 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/util/0.log"
Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.231844 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/util/0.log"
Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.244232 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/pull/0.log"
Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.267440 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/pull/0.log"
Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.404837 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/pull/0.log"
Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.420410 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/util/0.log"
Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.425026 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/extract/0.log"
Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.604490 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-utilities/0.log"
path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-utilities/0.log" Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.744730 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-content/0.log" Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.788881 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-utilities/0.log" Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.790840 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-content/0.log" Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.909857 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-utilities/0.log" Jan 22 11:40:55 crc kubenswrapper[4975]: I0122 11:40:55.942783 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-content/0.log" Jan 22 11:40:56 crc kubenswrapper[4975]: I0122 11:40:56.115074 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-utilities/0.log" Jan 22 11:40:56 crc kubenswrapper[4975]: I0122 11:40:56.322626 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-utilities/0.log" Jan 22 11:40:56 crc kubenswrapper[4975]: I0122 11:40:56.347204 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-content/0.log" Jan 22 11:40:56 crc kubenswrapper[4975]: I0122 11:40:56.350961 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-content/0.log" Jan 22 11:40:56 crc kubenswrapper[4975]: I0122 11:40:56.366704 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/registry-server/0.log" Jan 22 11:40:56 crc kubenswrapper[4975]: I0122 11:40:56.535064 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-content/0.log" Jan 22 11:40:56 crc kubenswrapper[4975]: I0122 11:40:56.539248 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-utilities/0.log" Jan 22 11:40:56 crc kubenswrapper[4975]: I0122 11:40:56.820751 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pkfxd_6cc0ba15-cad5-416b-8428-6614ca1ebbec/marketplace-operator/0.log" Jan 22 11:40:56 crc kubenswrapper[4975]: I0122 11:40:56.820912 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-utilities/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.013704 4975 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-content/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.049079 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-utilities/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.054610 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/registry-server/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.116286 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-content/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.212032 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-utilities/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.217890 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-content/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.341409 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/registry-server/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.466071 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-utilities/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.584881 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-utilities/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.626610 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-content/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.638150 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-content/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.824168 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-content/0.log" Jan 22 11:40:57 crc kubenswrapper[4975]: I0122 11:40:57.832445 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-utilities/0.log" Jan 22 11:40:58 crc kubenswrapper[4975]: I0122 11:40:58.207453 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/registry-server/0.log" Jan 22 11:41:07 crc kubenswrapper[4975]: I0122 11:41:07.763731 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:41:07 crc kubenswrapper[4975]: E0122 11:41:07.764640 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:41:18 crc kubenswrapper[4975]: I0122 11:41:18.755774 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:41:18 crc kubenswrapper[4975]: E0122 11:41:18.759880 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:41:20 crc kubenswrapper[4975]: E0122 11:41:20.995274 4975 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.51:49450->38.102.83.51:37165: write tcp 38.102.83.51:49450->38.102.83.51:37165: write: broken pipe Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.199509 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kcjq8"] Jan 22 11:41:23 crc kubenswrapper[4975]: E0122 11:41:23.200303 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e564d7-61b3-44e0-8e7b-05d192f30067" containerName="container-00" Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.200314 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e564d7-61b3-44e0-8e7b-05d192f30067" containerName="container-00" Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.200527 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e564d7-61b3-44e0-8e7b-05d192f30067" containerName="container-00" Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.202155 4975 util.go:30] "No sandbox for pod can be found. 
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.209062 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kcjq8"]
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.322228 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-catalog-content\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.322514 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-utilities\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.322618 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5nt\" (UniqueName: \"kubernetes.io/projected/59d713da-a469-446c-bf1d-01694a762411-kube-api-access-wv5nt\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.424628 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-catalog-content\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.424976 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-utilities\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.424993 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv5nt\" (UniqueName: \"kubernetes.io/projected/59d713da-a469-446c-bf1d-01694a762411-kube-api-access-wv5nt\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.425536 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-catalog-content\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.425602 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-utilities\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.446005 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv5nt\" (UniqueName: \"kubernetes.io/projected/59d713da-a469-446c-bf1d-01694a762411-kube-api-access-wv5nt\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8"
\"kube-api-access-wv5nt\" (UniqueName: \"kubernetes.io/projected/59d713da-a469-446c-bf1d-01694a762411-kube-api-access-wv5nt\") pod \"redhat-operators-kcjq8\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") " pod="openshift-marketplace/redhat-operators-kcjq8" Jan 22 11:41:23 crc kubenswrapper[4975]: I0122 11:41:23.520764 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcjq8" Jan 22 11:41:24 crc kubenswrapper[4975]: I0122 11:41:24.112778 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kcjq8"] Jan 22 11:41:24 crc kubenswrapper[4975]: I0122 11:41:24.798229 4975 generic.go:334] "Generic (PLEG): container finished" podID="59d713da-a469-446c-bf1d-01694a762411" containerID="2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5" exitCode=0 Jan 22 11:41:24 crc kubenswrapper[4975]: I0122 11:41:24.798285 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcjq8" event={"ID":"59d713da-a469-446c-bf1d-01694a762411","Type":"ContainerDied","Data":"2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5"} Jan 22 11:41:24 crc kubenswrapper[4975]: I0122 11:41:24.798577 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcjq8" event={"ID":"59d713da-a469-446c-bf1d-01694a762411","Type":"ContainerStarted","Data":"535bede919b7a8979e9395e5b6360dcdba9b5395b0698ddafe1f28593f3cba12"} Jan 22 11:41:24 crc kubenswrapper[4975]: I0122 11:41:24.800085 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:41:25 crc kubenswrapper[4975]: I0122 11:41:25.807765 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcjq8" event={"ID":"59d713da-a469-446c-bf1d-01694a762411","Type":"ContainerStarted","Data":"78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70"} Jan 22 11:41:28 crc kubenswrapper[4975]: I0122 11:41:28.837556 4975 generic.go:334] "Generic (PLEG): container finished" podID="59d713da-a469-446c-bf1d-01694a762411" containerID="78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70" exitCode=0 Jan 22 11:41:28 crc kubenswrapper[4975]: I0122 11:41:28.837734 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcjq8" event={"ID":"59d713da-a469-446c-bf1d-01694a762411","Type":"ContainerDied","Data":"78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70"} Jan 22 11:41:29 crc kubenswrapper[4975]: I0122 11:41:29.847597 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcjq8" event={"ID":"59d713da-a469-446c-bf1d-01694a762411","Type":"ContainerStarted","Data":"10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08"} Jan 22 11:41:29 crc kubenswrapper[4975]: I0122 11:41:29.870447 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kcjq8" podStartSLOduration=2.405716636 podStartE2EDuration="6.870430288s" podCreationTimestamp="2026-01-22 11:41:23 +0000 UTC" firstStartedPulling="2026-01-22 11:41:24.799831739 +0000 UTC m=+3877.457867849" lastFinishedPulling="2026-01-22 11:41:29.264545381 +0000 UTC m=+3881.922581501" observedRunningTime="2026-01-22 11:41:29.868546636 +0000 UTC m=+3882.526582746" watchObservedRunningTime="2026-01-22 11:41:29.870430288 +0000 UTC m=+3882.528466398" Jan 22 11:41:30 crc 
Jan 22 11:41:30 crc kubenswrapper[4975]: E0122 11:41:30.755636 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:41:33 crc kubenswrapper[4975]: I0122 11:41:33.521494 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:33 crc kubenswrapper[4975]: I0122 11:41:33.522317 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:34 crc kubenswrapper[4975]: I0122 11:41:34.601785 4975 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kcjq8" podUID="59d713da-a469-446c-bf1d-01694a762411" containerName="registry-server" probeResult="failure" output=<
Jan 22 11:41:34 crc kubenswrapper[4975]: timeout: failed to connect service ":50051" within 1s
Jan 22 11:41:34 crc kubenswrapper[4975]: >
Jan 22 11:41:43 crc kubenswrapper[4975]: I0122 11:41:43.581427 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:43 crc kubenswrapper[4975]: I0122 11:41:43.632690 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:43 crc kubenswrapper[4975]: I0122 11:41:43.822930 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kcjq8"]
Jan 22 11:41:44 crc kubenswrapper[4975]: I0122 11:41:44.755351 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05"
Jan 22 11:41:44 crc kubenswrapper[4975]: E0122 11:41:44.755688 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"
Jan 22 11:41:44 crc kubenswrapper[4975]: I0122 11:41:44.993604 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kcjq8" podUID="59d713da-a469-446c-bf1d-01694a762411" containerName="registry-server" containerID="cri-o://10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08" gracePeriod=2
Jan 22 11:41:45 crc kubenswrapper[4975]: I0122 11:41:45.965139 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.010989 4975 generic.go:334] "Generic (PLEG): container finished" podID="59d713da-a469-446c-bf1d-01694a762411" containerID="10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08" exitCode=0
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.011042 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcjq8" event={"ID":"59d713da-a469-446c-bf1d-01694a762411","Type":"ContainerDied","Data":"10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08"}
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.011056 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcjq8"
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.011074 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcjq8" event={"ID":"59d713da-a469-446c-bf1d-01694a762411","Type":"ContainerDied","Data":"535bede919b7a8979e9395e5b6360dcdba9b5395b0698ddafe1f28593f3cba12"}
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.011096 4975 scope.go:117] "RemoveContainer" containerID="10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08"
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.035007 4975 scope.go:117] "RemoveContainer" containerID="78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70"
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.056025 4975 scope.go:117] "RemoveContainer" containerID="2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5"
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.102301 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-utilities\") pod \"59d713da-a469-446c-bf1d-01694a762411\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") "
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.102567 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv5nt\" (UniqueName: \"kubernetes.io/projected/59d713da-a469-446c-bf1d-01694a762411-kube-api-access-wv5nt\") pod \"59d713da-a469-446c-bf1d-01694a762411\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") "
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.102609 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-catalog-content\") pod \"59d713da-a469-446c-bf1d-01694a762411\" (UID: \"59d713da-a469-446c-bf1d-01694a762411\") "
Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.103937 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-utilities" (OuterVolumeSpecName: "utilities") pod "59d713da-a469-446c-bf1d-01694a762411" (UID: "59d713da-a469-446c-bf1d-01694a762411"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.109454 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59d713da-a469-446c-bf1d-01694a762411-kube-api-access-wv5nt" (OuterVolumeSpecName: "kube-api-access-wv5nt") pod "59d713da-a469-446c-bf1d-01694a762411" (UID: "59d713da-a469-446c-bf1d-01694a762411"). InnerVolumeSpecName "kube-api-access-wv5nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.114669 4975 scope.go:117] "RemoveContainer" containerID="10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08" Jan 22 11:41:46 crc kubenswrapper[4975]: E0122 11:41:46.115397 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08\": container with ID starting with 10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08 not found: ID does not exist" containerID="10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.115439 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08"} err="failed to get container status \"10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08\": rpc error: code = NotFound desc = could not find container \"10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08\": container with ID starting with 10178fa1ff30823cc70e1f4751c0f1911883c4f286fc30f9429ab75d96ccab08 not found: ID does not exist" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.115481 4975 scope.go:117] "RemoveContainer" containerID="78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70" Jan 22 11:41:46 crc kubenswrapper[4975]: E0122 11:41:46.115826 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70\": container with ID starting with 78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70 not found: ID does not exist" containerID="78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.115871 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70"} err="failed to get container status \"78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70\": rpc error: code = NotFound desc = could not find container \"78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70\": container with ID starting with 78291eb145560b7aea39b4d5e52326157e4800acda17220508066e956478eb70 not found: ID does not exist" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.115906 4975 scope.go:117] "RemoveContainer" containerID="2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5" Jan 22 11:41:46 crc kubenswrapper[4975]: E0122 11:41:46.116309 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5\": container with ID starting with 2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5 not found: ID does not 
exist" containerID="2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.116419 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5"} err="failed to get container status \"2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5\": rpc error: code = NotFound desc = could not find container \"2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5\": container with ID starting with 2a5da845579065a9a5b0be9f1dd28fd77bb3fc13bf39f8d5011b2109418610f5 not found: ID does not exist" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.205014 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.205042 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv5nt\" (UniqueName: \"kubernetes.io/projected/59d713da-a469-446c-bf1d-01694a762411-kube-api-access-wv5nt\") on node \"crc\" DevicePath \"\"" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.263207 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59d713da-a469-446c-bf1d-01694a762411" (UID: "59d713da-a469-446c-bf1d-01694a762411"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.306782 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d713da-a469-446c-bf1d-01694a762411-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.354158 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kcjq8"] Jan 22 11:41:46 crc kubenswrapper[4975]: I0122 11:41:46.361231 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kcjq8"] Jan 22 11:41:47 crc kubenswrapper[4975]: I0122 11:41:47.772214 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59d713da-a469-446c-bf1d-01694a762411" path="/var/lib/kubelet/pods/59d713da-a469-446c-bf1d-01694a762411/volumes" Jan 22 11:41:56 crc kubenswrapper[4975]: I0122 11:41:56.755525 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:41:56 crc kubenswrapper[4975]: E0122 11:41:56.758199 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:42:10 crc kubenswrapper[4975]: I0122 11:42:10.755946 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:42:10 crc kubenswrapper[4975]: E0122 11:42:10.758913 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:42:22 crc kubenswrapper[4975]: I0122 11:42:22.755206 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:42:22 crc kubenswrapper[4975]: E0122 11:42:22.756042 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:42:34 crc kubenswrapper[4975]: I0122 11:42:34.225615 4975 scope.go:117] "RemoveContainer" containerID="dcbc75d13b7560d8d3c0790efed970fe52424bea859d14e67bc5f20a96699546" Jan 22 11:42:34 crc kubenswrapper[4975]: I0122 11:42:34.262714 4975 scope.go:117] "RemoveContainer" containerID="3fc442e0032d0d9d13463c05cc51696e09bf92ff56a47a34aa12dccd764500cb" Jan 22 11:42:34 crc kubenswrapper[4975]: I0122 11:42:34.308848 4975 scope.go:117] "RemoveContainer" containerID="27398635f21ad702c73a3e89f1a552f91f5633d982e0381670f25721db6a6186" Jan 22 11:42:36 crc kubenswrapper[4975]: I0122 11:42:36.755668 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:42:36 crc kubenswrapper[4975]: E0122 11:42:36.756367 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:42:44 crc kubenswrapper[4975]: I0122 11:42:44.576386 4975 generic.go:334] "Generic (PLEG): container finished" podID="e962ca51-fd5b-4010-a9ff-9448189876be" containerID="36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3" exitCode=0 Jan 22 11:42:44 crc kubenswrapper[4975]: I0122 11:42:44.576502 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8twjj/must-gather-r8vqm" event={"ID":"e962ca51-fd5b-4010-a9ff-9448189876be","Type":"ContainerDied","Data":"36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3"} Jan 22 11:42:44 crc kubenswrapper[4975]: I0122 11:42:44.577459 4975 scope.go:117] "RemoveContainer" containerID="36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3" Jan 22 11:42:45 crc kubenswrapper[4975]: I0122 11:42:45.427160 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8twjj_must-gather-r8vqm_e962ca51-fd5b-4010-a9ff-9448189876be/gather/0.log" Jan 22 11:42:51 crc kubenswrapper[4975]: I0122 11:42:51.755451 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:42:51 crc kubenswrapper[4975]: E0122 11:42:51.756405 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.061982 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8twjj/must-gather-r8vqm"] Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.064033 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8twjj/must-gather-r8vqm" podUID="e962ca51-fd5b-4010-a9ff-9448189876be" containerName="copy" containerID="cri-o://0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521" gracePeriod=2 Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.074465 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8twjj/must-gather-r8vqm"] Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.545655 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8twjj_must-gather-r8vqm_e962ca51-fd5b-4010-a9ff-9448189876be/copy/0.log" Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.546291 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8twjj/must-gather-r8vqm" Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.667628 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8twjj_must-gather-r8vqm_e962ca51-fd5b-4010-a9ff-9448189876be/copy/0.log" Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.668271 4975 generic.go:334] "Generic (PLEG): container finished" podID="e962ca51-fd5b-4010-a9ff-9448189876be" containerID="0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521" exitCode=143 Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.668346 4975 scope.go:117] "RemoveContainer" containerID="0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521" Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.668439 4975 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.690271 4975 scope.go:117] "RemoveContainer" containerID="36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3"
Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.727030 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e962ca51-fd5b-4010-a9ff-9448189876be-must-gather-output\") pod \"e962ca51-fd5b-4010-a9ff-9448189876be\" (UID: \"e962ca51-fd5b-4010-a9ff-9448189876be\") "
Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.727126 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkdvx\" (UniqueName: \"kubernetes.io/projected/e962ca51-fd5b-4010-a9ff-9448189876be-kube-api-access-mkdvx\") pod \"e962ca51-fd5b-4010-a9ff-9448189876be\" (UID: \"e962ca51-fd5b-4010-a9ff-9448189876be\") "
Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.734518 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e962ca51-fd5b-4010-a9ff-9448189876be-kube-api-access-mkdvx" (OuterVolumeSpecName: "kube-api-access-mkdvx") pod "e962ca51-fd5b-4010-a9ff-9448189876be" (UID: "e962ca51-fd5b-4010-a9ff-9448189876be"). InnerVolumeSpecName "kube-api-access-mkdvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.734622 4975 scope.go:117] "RemoveContainer" containerID="0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521"
Jan 22 11:42:53 crc kubenswrapper[4975]: E0122 11:42:53.735057 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521\": container with ID starting with 0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521 not found: ID does not exist" containerID="0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521"
Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.735100 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521"} err="failed to get container status \"0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521\": rpc error: code = NotFound desc = could not find container \"0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521\": container with ID starting with 0fc67c29a0f26950b1f94c8816597a94e7de2475efcc5e2d186e98773a4ba521 not found: ID does not exist"
Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.735125 4975 scope.go:117] "RemoveContainer" containerID="36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3"
Jan 22 11:42:53 crc kubenswrapper[4975]: E0122 11:42:53.735413 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3\": container with ID starting with 36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3 not found: ID does not exist" containerID="36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3"
Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.735434 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3"} err="failed to get container status \"36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3\": rpc error: code = NotFound desc = could not find container \"36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3\": container with ID starting with 36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3 not found: ID does not exist"
containerID={"Type":"cri-o","ID":"36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3"} err="failed to get container status \"36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3\": rpc error: code = NotFound desc = could not find container \"36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3\": container with ID starting with 36bd11d6a16a377c9cfee57c51df7393267e7044c5aee0107e09f7d46d3114a3 not found: ID does not exist" Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.830418 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkdvx\" (UniqueName: \"kubernetes.io/projected/e962ca51-fd5b-4010-a9ff-9448189876be-kube-api-access-mkdvx\") on node \"crc\" DevicePath \"\"" Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.888434 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e962ca51-fd5b-4010-a9ff-9448189876be-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e962ca51-fd5b-4010-a9ff-9448189876be" (UID: "e962ca51-fd5b-4010-a9ff-9448189876be"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:42:53 crc kubenswrapper[4975]: I0122 11:42:53.932021 4975 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e962ca51-fd5b-4010-a9ff-9448189876be-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 11:42:55 crc kubenswrapper[4975]: I0122 11:42:55.765961 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e962ca51-fd5b-4010-a9ff-9448189876be" path="/var/lib/kubelet/pods/e962ca51-fd5b-4010-a9ff-9448189876be/volumes" Jan 22 11:43:05 crc kubenswrapper[4975]: I0122 11:43:05.754984 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:43:06 crc kubenswrapper[4975]: I0122 11:43:06.798083 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"05b554e15b11f29874876619dcf16d3adf05366b2e8a168deeb6362afa5a541c"} Jan 22 11:44:34 crc kubenswrapper[4975]: I0122 11:44:34.404920 4975 scope.go:117] "RemoveContainer" containerID="092354d44768fb9f82b532849350dd7fd19ca0e36b70f1f33c676941ef6ab983" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.181779 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg"] Jan 22 11:45:00 crc kubenswrapper[4975]: E0122 11:45:00.183863 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d713da-a469-446c-bf1d-01694a762411" containerName="extract-utilities" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.183973 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d713da-a469-446c-bf1d-01694a762411" containerName="extract-utilities" Jan 22 11:45:00 crc kubenswrapper[4975]: E0122 11:45:00.184067 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d713da-a469-446c-bf1d-01694a762411" containerName="registry-server" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.184162 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d713da-a469-446c-bf1d-01694a762411" containerName="registry-server" Jan 22 11:45:00 crc kubenswrapper[4975]: E0122 11:45:00.184281 4975 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e962ca51-fd5b-4010-a9ff-9448189876be" containerName="gather" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.184390 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e962ca51-fd5b-4010-a9ff-9448189876be" containerName="gather" Jan 22 11:45:00 crc kubenswrapper[4975]: E0122 11:45:00.184499 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d713da-a469-446c-bf1d-01694a762411" containerName="extract-content" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.184587 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d713da-a469-446c-bf1d-01694a762411" containerName="extract-content" Jan 22 11:45:00 crc kubenswrapper[4975]: E0122 11:45:00.184685 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e962ca51-fd5b-4010-a9ff-9448189876be" containerName="copy" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.184767 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="e962ca51-fd5b-4010-a9ff-9448189876be" containerName="copy" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.185094 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e962ca51-fd5b-4010-a9ff-9448189876be" containerName="gather" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.185202 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="e962ca51-fd5b-4010-a9ff-9448189876be" containerName="copy" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.185303 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d713da-a469-446c-bf1d-01694a762411" containerName="registry-server" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.186178 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.191975 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.192264 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.192439 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg"] Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.315002 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhskg\" (UniqueName: \"kubernetes.io/projected/9dec7d13-4fd4-4724-92cb-0a2c16d58754-kube-api-access-fhskg\") pod \"collect-profiles-29484705-8jqdg\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.315346 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dec7d13-4fd4-4724-92cb-0a2c16d58754-secret-volume\") pod \"collect-profiles-29484705-8jqdg\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.315551 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9dec7d13-4fd4-4724-92cb-0a2c16d58754-config-volume\") pod \"collect-profiles-29484705-8jqdg\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.417807 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhskg\" (UniqueName: \"kubernetes.io/projected/9dec7d13-4fd4-4724-92cb-0a2c16d58754-kube-api-access-fhskg\") pod \"collect-profiles-29484705-8jqdg\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.417918 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dec7d13-4fd4-4724-92cb-0a2c16d58754-secret-volume\") pod \"collect-profiles-29484705-8jqdg\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.417974 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dec7d13-4fd4-4724-92cb-0a2c16d58754-config-volume\") pod \"collect-profiles-29484705-8jqdg\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.418879 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dec7d13-4fd4-4724-92cb-0a2c16d58754-config-volume\") pod \"collect-profiles-29484705-8jqdg\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.429365 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dec7d13-4fd4-4724-92cb-0a2c16d58754-secret-volume\") pod \"collect-profiles-29484705-8jqdg\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.440080 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhskg\" (UniqueName: \"kubernetes.io/projected/9dec7d13-4fd4-4724-92cb-0a2c16d58754-kube-api-access-fhskg\") pod \"collect-profiles-29484705-8jqdg\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.504853 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:00 crc kubenswrapper[4975]: I0122 11:45:00.954068 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg"] Jan 22 11:45:01 crc kubenswrapper[4975]: I0122 11:45:01.915705 4975 generic.go:334] "Generic (PLEG): container finished" podID="9dec7d13-4fd4-4724-92cb-0a2c16d58754" containerID="dfb7a60fd1963f193a6918b61c4b7bdbde94a3df3a46c4c6f07271c3a44e18d9" exitCode=0 Jan 22 11:45:01 crc kubenswrapper[4975]: I0122 11:45:01.915757 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" event={"ID":"9dec7d13-4fd4-4724-92cb-0a2c16d58754","Type":"ContainerDied","Data":"dfb7a60fd1963f193a6918b61c4b7bdbde94a3df3a46c4c6f07271c3a44e18d9"} Jan 22 11:45:01 crc kubenswrapper[4975]: I0122 11:45:01.916027 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" event={"ID":"9dec7d13-4fd4-4724-92cb-0a2c16d58754","Type":"ContainerStarted","Data":"567bcf7e725e8fa16fc66d09d742ec93983060fcc924a23f7f679c4649e20793"} Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.303577 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.374525 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dec7d13-4fd4-4724-92cb-0a2c16d58754-config-volume\") pod \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.374711 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dec7d13-4fd4-4724-92cb-0a2c16d58754-secret-volume\") pod \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.374957 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhskg\" (UniqueName: \"kubernetes.io/projected/9dec7d13-4fd4-4724-92cb-0a2c16d58754-kube-api-access-fhskg\") pod \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\" (UID: \"9dec7d13-4fd4-4724-92cb-0a2c16d58754\") " Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.375406 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dec7d13-4fd4-4724-92cb-0a2c16d58754-config-volume" (OuterVolumeSpecName: "config-volume") pod "9dec7d13-4fd4-4724-92cb-0a2c16d58754" (UID: "9dec7d13-4fd4-4724-92cb-0a2c16d58754"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.381398 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dec7d13-4fd4-4724-92cb-0a2c16d58754-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9dec7d13-4fd4-4724-92cb-0a2c16d58754" (UID: "9dec7d13-4fd4-4724-92cb-0a2c16d58754"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.381697 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dec7d13-4fd4-4724-92cb-0a2c16d58754-kube-api-access-fhskg" (OuterVolumeSpecName: "kube-api-access-fhskg") pod "9dec7d13-4fd4-4724-92cb-0a2c16d58754" (UID: "9dec7d13-4fd4-4724-92cb-0a2c16d58754"). InnerVolumeSpecName "kube-api-access-fhskg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.477609 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhskg\" (UniqueName: \"kubernetes.io/projected/9dec7d13-4fd4-4724-92cb-0a2c16d58754-kube-api-access-fhskg\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.477646 4975 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dec7d13-4fd4-4724-92cb-0a2c16d58754-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.477655 4975 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dec7d13-4fd4-4724-92cb-0a2c16d58754-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.934753 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" event={"ID":"9dec7d13-4fd4-4724-92cb-0a2c16d58754","Type":"ContainerDied","Data":"567bcf7e725e8fa16fc66d09d742ec93983060fcc924a23f7f679c4649e20793"} Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.934792 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="567bcf7e725e8fa16fc66d09d742ec93983060fcc924a23f7f679c4649e20793" Jan 22 11:45:03 crc kubenswrapper[4975]: I0122 11:45:03.934850 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-8jqdg" Jan 22 11:45:04 crc kubenswrapper[4975]: I0122 11:45:04.380202 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr"] Jan 22 11:45:04 crc kubenswrapper[4975]: I0122 11:45:04.388297 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-vrhzr"] Jan 22 11:45:05 crc kubenswrapper[4975]: I0122 11:45:05.765568 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d11385c-ca1a-4cf2-b9e3-e7c9000de99c" path="/var/lib/kubelet/pods/1d11385c-ca1a-4cf2-b9e3-e7c9000de99c/volumes" Jan 22 11:45:27 crc kubenswrapper[4975]: I0122 11:45:27.436557 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:45:27 crc kubenswrapper[4975]: I0122 11:45:27.437188 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:45:34 crc kubenswrapper[4975]: I0122 11:45:34.472689 4975 scope.go:117] "RemoveContainer" containerID="c79523c64b9dd36984010a8b02ab3aed6285c748d6264bc929aa022d06dce4f6" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.275077 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gc56m/must-gather-wnh2d"] Jan 22 11:45:35 crc kubenswrapper[4975]: E0122 11:45:35.275749 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dec7d13-4fd4-4724-92cb-0a2c16d58754" containerName="collect-profiles" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.275775 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dec7d13-4fd4-4724-92cb-0a2c16d58754" containerName="collect-profiles" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.276010 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dec7d13-4fd4-4724-92cb-0a2c16d58754" containerName="collect-profiles" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.277246 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.279091 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gc56m"/"openshift-service-ca.crt" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.279188 4975 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gc56m"/"kube-root-ca.crt" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.281462 4975 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gc56m"/"default-dockercfg-c9c82" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.286215 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gc56m/must-gather-wnh2d"] Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.350412 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/18541382-fcdb-4c06-a929-3753b373d967-must-gather-output\") pod \"must-gather-wnh2d\" (UID: \"18541382-fcdb-4c06-a929-3753b373d967\") " pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.350612 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xrmq\" (UniqueName: \"kubernetes.io/projected/18541382-fcdb-4c06-a929-3753b373d967-kube-api-access-4xrmq\") pod \"must-gather-wnh2d\" (UID: \"18541382-fcdb-4c06-a929-3753b373d967\") " pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.452989 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xrmq\" (UniqueName: \"kubernetes.io/projected/18541382-fcdb-4c06-a929-3753b373d967-kube-api-access-4xrmq\") pod \"must-gather-wnh2d\" (UID: \"18541382-fcdb-4c06-a929-3753b373d967\") " pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.453176 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/18541382-fcdb-4c06-a929-3753b373d967-must-gather-output\") pod \"must-gather-wnh2d\" (UID: \"18541382-fcdb-4c06-a929-3753b373d967\") " pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.454392 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/18541382-fcdb-4c06-a929-3753b373d967-must-gather-output\") pod \"must-gather-wnh2d\" (UID: \"18541382-fcdb-4c06-a929-3753b373d967\") " pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.476446 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xrmq\" (UniqueName: \"kubernetes.io/projected/18541382-fcdb-4c06-a929-3753b373d967-kube-api-access-4xrmq\") pod \"must-gather-wnh2d\" (UID: \"18541382-fcdb-4c06-a929-3753b373d967\") " pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:45:35 crc kubenswrapper[4975]: I0122 11:45:35.599017 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:45:36 crc kubenswrapper[4975]: I0122 11:45:36.091055 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gc56m/must-gather-wnh2d"] Jan 22 11:45:36 crc kubenswrapper[4975]: I0122 11:45:36.256512 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/must-gather-wnh2d" event={"ID":"18541382-fcdb-4c06-a929-3753b373d967","Type":"ContainerStarted","Data":"fffaff0017d01e0b128a0f2fad0b2a50605d1cff7484a91f2433316ad1bd57a5"} Jan 22 11:45:36 crc kubenswrapper[4975]: I0122 11:45:36.993107 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zsp5v"] Jan 22 11:45:36 crc kubenswrapper[4975]: I0122 11:45:36.994963 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.011947 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsp5v"] Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.089184 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-catalog-content\") pod \"redhat-marketplace-zsp5v\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.089379 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-utilities\") pod \"redhat-marketplace-zsp5v\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.089677 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn7fn\" (UniqueName: \"kubernetes.io/projected/2458dd79-164b-44dc-9291-daa5b2df61fd-kube-api-access-rn7fn\") pod \"redhat-marketplace-zsp5v\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.191113 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn7fn\" (UniqueName: \"kubernetes.io/projected/2458dd79-164b-44dc-9291-daa5b2df61fd-kube-api-access-rn7fn\") pod \"redhat-marketplace-zsp5v\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.191232 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-catalog-content\") pod \"redhat-marketplace-zsp5v\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.191303 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-utilities\") pod \"redhat-marketplace-zsp5v\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc 
kubenswrapper[4975]: I0122 11:45:37.191886 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-utilities\") pod \"redhat-marketplace-zsp5v\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.192558 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-catalog-content\") pod \"redhat-marketplace-zsp5v\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.211421 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn7fn\" (UniqueName: \"kubernetes.io/projected/2458dd79-164b-44dc-9291-daa5b2df61fd-kube-api-access-rn7fn\") pod \"redhat-marketplace-zsp5v\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.270182 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/must-gather-wnh2d" event={"ID":"18541382-fcdb-4c06-a929-3753b373d967","Type":"ContainerStarted","Data":"191bf25bb682da6f7daf33b9082fced0d375905d9620bfcc448b1c16716e2686"} Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.270256 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/must-gather-wnh2d" event={"ID":"18541382-fcdb-4c06-a929-3753b373d967","Type":"ContainerStarted","Data":"6862723075b7bd76fb9e8451ecb6c313874d5a39186b5c07d0838a64382c6429"} Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.286304 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gc56m/must-gather-wnh2d" podStartSLOduration=2.28628247 podStartE2EDuration="2.28628247s" podCreationTimestamp="2026-01-22 11:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:45:37.28445655 +0000 UTC m=+4129.942492670" watchObservedRunningTime="2026-01-22 11:45:37.28628247 +0000 UTC m=+4129.944318580" Jan 22 11:45:37 crc kubenswrapper[4975]: I0122 11:45:37.314636 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:38 crc kubenswrapper[4975]: I0122 11:45:37.788139 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsp5v"] Jan 22 11:45:38 crc kubenswrapper[4975]: I0122 11:45:38.304989 4975 generic.go:334] "Generic (PLEG): container finished" podID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerID="77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228" exitCode=0 Jan 22 11:45:38 crc kubenswrapper[4975]: I0122 11:45:38.306787 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsp5v" event={"ID":"2458dd79-164b-44dc-9291-daa5b2df61fd","Type":"ContainerDied","Data":"77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228"} Jan 22 11:45:38 crc kubenswrapper[4975]: I0122 11:45:38.306821 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsp5v" event={"ID":"2458dd79-164b-44dc-9291-daa5b2df61fd","Type":"ContainerStarted","Data":"54f843d14ea84f3baa6f921595c9ef1e7d120ec9b04952f1290cbe102aab77e7"} Jan 22 11:45:39 crc kubenswrapper[4975]: I0122 11:45:39.989317 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gc56m/crc-debug-5rcnk"] Jan 22 11:45:39 crc kubenswrapper[4975]: I0122 11:45:39.991368 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:45:40 crc kubenswrapper[4975]: I0122 11:45:40.046983 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-host\") pod \"crc-debug-5rcnk\" (UID: \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\") " pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:45:40 crc kubenswrapper[4975]: I0122 11:45:40.047318 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm7c4\" (UniqueName: \"kubernetes.io/projected/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-kube-api-access-bm7c4\") pod \"crc-debug-5rcnk\" (UID: \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\") " pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:45:40 crc kubenswrapper[4975]: I0122 11:45:40.149502 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-host\") pod \"crc-debug-5rcnk\" (UID: \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\") " pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:45:40 crc kubenswrapper[4975]: I0122 11:45:40.149597 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm7c4\" (UniqueName: \"kubernetes.io/projected/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-kube-api-access-bm7c4\") pod \"crc-debug-5rcnk\" (UID: \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\") " pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:45:40 crc kubenswrapper[4975]: I0122 11:45:40.149627 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-host\") pod \"crc-debug-5rcnk\" (UID: \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\") " pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:45:40 crc kubenswrapper[4975]: I0122 11:45:40.176449 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bm7c4\" (UniqueName: \"kubernetes.io/projected/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-kube-api-access-bm7c4\") pod \"crc-debug-5rcnk\" (UID: \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\") " pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:45:40 crc kubenswrapper[4975]: I0122 11:45:40.308545 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:45:40 crc kubenswrapper[4975]: I0122 11:45:40.319741 4975 generic.go:334] "Generic (PLEG): container finished" podID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerID="efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2" exitCode=0 Jan 22 11:45:40 crc kubenswrapper[4975]: I0122 11:45:40.319793 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsp5v" event={"ID":"2458dd79-164b-44dc-9291-daa5b2df61fd","Type":"ContainerDied","Data":"efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2"} Jan 22 11:45:41 crc kubenswrapper[4975]: I0122 11:45:41.330582 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/crc-debug-5rcnk" event={"ID":"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db","Type":"ContainerStarted","Data":"d513bdfa9126bd846559cf5ed3bae38ad2c3c012e01854b696b92bf5984e2c31"} Jan 22 11:45:41 crc kubenswrapper[4975]: I0122 11:45:41.330879 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/crc-debug-5rcnk" event={"ID":"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db","Type":"ContainerStarted","Data":"bacfc90e16bef5a1598bf58bd0e8fe932bf03bbc2fe1b763442cd328fe31905f"} Jan 22 11:45:41 crc kubenswrapper[4975]: I0122 11:45:41.346480 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsp5v" event={"ID":"2458dd79-164b-44dc-9291-daa5b2df61fd","Type":"ContainerStarted","Data":"0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c"} Jan 22 11:45:41 crc kubenswrapper[4975]: I0122 11:45:41.361013 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gc56m/crc-debug-5rcnk" podStartSLOduration=2.360989044 podStartE2EDuration="2.360989044s" podCreationTimestamp="2026-01-22 11:45:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:45:41.350919423 +0000 UTC m=+4134.008955533" watchObservedRunningTime="2026-01-22 11:45:41.360989044 +0000 UTC m=+4134.019025154" Jan 22 11:45:41 crc kubenswrapper[4975]: I0122 11:45:41.380750 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zsp5v" podStartSLOduration=2.872721597 podStartE2EDuration="5.380728169s" podCreationTimestamp="2026-01-22 11:45:36 +0000 UTC" firstStartedPulling="2026-01-22 11:45:38.307595492 +0000 UTC m=+4130.965631602" lastFinishedPulling="2026-01-22 11:45:40.815602064 +0000 UTC m=+4133.473638174" observedRunningTime="2026-01-22 11:45:41.372654391 +0000 UTC m=+4134.030690501" watchObservedRunningTime="2026-01-22 11:45:41.380728169 +0000 UTC m=+4134.038764289" Jan 22 11:45:47 crc kubenswrapper[4975]: I0122 11:45:47.315264 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:47 crc kubenswrapper[4975]: I0122 11:45:47.315861 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:47 crc kubenswrapper[4975]: I0122 11:45:47.365115 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:47 crc kubenswrapper[4975]: I0122 11:45:47.467754 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:47 crc kubenswrapper[4975]: I0122 11:45:47.598828 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsp5v"] Jan 22 11:45:49 crc kubenswrapper[4975]: I0122 11:45:49.414617 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zsp5v" podUID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerName="registry-server" containerID="cri-o://0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c" gracePeriod=2 Jan 22 11:45:49 crc kubenswrapper[4975]: I0122 11:45:49.919501 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.032823 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-catalog-content\") pod \"2458dd79-164b-44dc-9291-daa5b2df61fd\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.032938 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-utilities\") pod \"2458dd79-164b-44dc-9291-daa5b2df61fd\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.032974 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn7fn\" (UniqueName: \"kubernetes.io/projected/2458dd79-164b-44dc-9291-daa5b2df61fd-kube-api-access-rn7fn\") pod \"2458dd79-164b-44dc-9291-daa5b2df61fd\" (UID: \"2458dd79-164b-44dc-9291-daa5b2df61fd\") " Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.034770 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-utilities" (OuterVolumeSpecName: "utilities") pod "2458dd79-164b-44dc-9291-daa5b2df61fd" (UID: "2458dd79-164b-44dc-9291-daa5b2df61fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.061929 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2458dd79-164b-44dc-9291-daa5b2df61fd-kube-api-access-rn7fn" (OuterVolumeSpecName: "kube-api-access-rn7fn") pod "2458dd79-164b-44dc-9291-daa5b2df61fd" (UID: "2458dd79-164b-44dc-9291-daa5b2df61fd"). InnerVolumeSpecName "kube-api-access-rn7fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.112456 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2458dd79-164b-44dc-9291-daa5b2df61fd" (UID: "2458dd79-164b-44dc-9291-daa5b2df61fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.135151 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.135188 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2458dd79-164b-44dc-9291-daa5b2df61fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.135199 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn7fn\" (UniqueName: \"kubernetes.io/projected/2458dd79-164b-44dc-9291-daa5b2df61fd-kube-api-access-rn7fn\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.431165 4975 generic.go:334] "Generic (PLEG): container finished" podID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerID="0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c" exitCode=0 Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.431225 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsp5v" event={"ID":"2458dd79-164b-44dc-9291-daa5b2df61fd","Type":"ContainerDied","Data":"0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c"} Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.431307 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsp5v" event={"ID":"2458dd79-164b-44dc-9291-daa5b2df61fd","Type":"ContainerDied","Data":"54f843d14ea84f3baa6f921595c9ef1e7d120ec9b04952f1290cbe102aab77e7"} Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.431251 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsp5v" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.431435 4975 scope.go:117] "RemoveContainer" containerID="0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.470473 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsp5v"] Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.479241 4975 scope.go:117] "RemoveContainer" containerID="efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.481906 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsp5v"] Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.519080 4975 scope.go:117] "RemoveContainer" containerID="77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.569619 4975 scope.go:117] "RemoveContainer" containerID="0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c" Jan 22 11:45:50 crc kubenswrapper[4975]: E0122 11:45:50.570085 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c\": container with ID starting with 0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c not found: ID does not exist" containerID="0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.570138 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c"} err="failed to get container status \"0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c\": rpc error: code = NotFound desc = could not find container \"0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c\": container with ID starting with 0671be786de639c8db72a7493949177b934b59d800ef407b6b0e7c3b738ed21c not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.570173 4975 scope.go:117] "RemoveContainer" containerID="efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2" Jan 22 11:45:50 crc kubenswrapper[4975]: E0122 11:45:50.570555 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2\": container with ID starting with efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2 not found: ID does not exist" containerID="efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.570603 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2"} err="failed to get container status \"efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2\": rpc error: code = NotFound desc = could not find container \"efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2\": container with ID starting with efbd4c6c91b691a6d674bea8cd74e03fb2ff3f441c7e604954290e9ccd6c15a2 not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.570636 4975 scope.go:117] "RemoveContainer" 
containerID="77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228" Jan 22 11:45:50 crc kubenswrapper[4975]: E0122 11:45:50.570873 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228\": container with ID starting with 77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228 not found: ID does not exist" containerID="77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228" Jan 22 11:45:50 crc kubenswrapper[4975]: I0122 11:45:50.570897 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228"} err="failed to get container status \"77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228\": rpc error: code = NotFound desc = could not find container \"77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228\": container with ID starting with 77f801a1ac3f168cd178fa4402f75fa815ab348892bc30851221d7594bb8c228 not found: ID does not exist" Jan 22 11:45:51 crc kubenswrapper[4975]: I0122 11:45:51.776602 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2458dd79-164b-44dc-9291-daa5b2df61fd" path="/var/lib/kubelet/pods/2458dd79-164b-44dc-9291-daa5b2df61fd/volumes" Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.849502 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qwv98"] Jan 22 11:45:52 crc kubenswrapper[4975]: E0122 11:45:52.850983 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerName="extract-content" Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.851057 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerName="extract-content" Jan 22 11:45:52 crc kubenswrapper[4975]: E0122 11:45:52.851127 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerName="registry-server" Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.851176 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerName="registry-server" Jan 22 11:45:52 crc kubenswrapper[4975]: E0122 11:45:52.851255 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerName="extract-utilities" Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.851310 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerName="extract-utilities" Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.851586 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="2458dd79-164b-44dc-9291-daa5b2df61fd" containerName="registry-server" Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.853545 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.868906 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qwv98"] Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.995495 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhwp\" (UniqueName: \"kubernetes.io/projected/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-kube-api-access-rdhwp\") pod \"community-operators-qwv98\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.995565 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-catalog-content\") pod \"community-operators-qwv98\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:52 crc kubenswrapper[4975]: I0122 11:45:52.996104 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-utilities\") pod \"community-operators-qwv98\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:53 crc kubenswrapper[4975]: I0122 11:45:53.097864 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdhwp\" (UniqueName: \"kubernetes.io/projected/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-kube-api-access-rdhwp\") pod \"community-operators-qwv98\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:53 crc kubenswrapper[4975]: I0122 11:45:53.098154 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-catalog-content\") pod \"community-operators-qwv98\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:53 crc kubenswrapper[4975]: I0122 11:45:53.098309 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-utilities\") pod \"community-operators-qwv98\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:53 crc kubenswrapper[4975]: I0122 11:45:53.098695 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-catalog-content\") pod \"community-operators-qwv98\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:53 crc kubenswrapper[4975]: I0122 11:45:53.098819 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-utilities\") pod \"community-operators-qwv98\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:53 crc kubenswrapper[4975]: I0122 11:45:53.122231 4975 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rdhwp\" (UniqueName: \"kubernetes.io/projected/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-kube-api-access-rdhwp\") pod \"community-operators-qwv98\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:53 crc kubenswrapper[4975]: I0122 11:45:53.176576 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:45:54 crc kubenswrapper[4975]: I0122 11:45:54.040433 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qwv98"] Jan 22 11:45:54 crc kubenswrapper[4975]: I0122 11:45:54.484696 4975 generic.go:334] "Generic (PLEG): container finished" podID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerID="fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873" exitCode=0 Jan 22 11:45:54 crc kubenswrapper[4975]: I0122 11:45:54.484879 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwv98" event={"ID":"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f","Type":"ContainerDied","Data":"fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873"} Jan 22 11:45:54 crc kubenswrapper[4975]: I0122 11:45:54.484967 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwv98" event={"ID":"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f","Type":"ContainerStarted","Data":"ea9b73694e6b793855c11bfd769d1d55c3fbb566b55b37ce9e9709441b398ed6"} Jan 22 11:45:55 crc kubenswrapper[4975]: I0122 11:45:55.495412 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwv98" event={"ID":"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f","Type":"ContainerStarted","Data":"496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683"} Jan 22 11:45:56 crc kubenswrapper[4975]: I0122 11:45:56.505401 4975 generic.go:334] "Generic (PLEG): container finished" podID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerID="496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683" exitCode=0 Jan 22 11:45:56 crc kubenswrapper[4975]: I0122 11:45:56.505486 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwv98" event={"ID":"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f","Type":"ContainerDied","Data":"496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683"} Jan 22 11:45:57 crc kubenswrapper[4975]: I0122 11:45:57.436214 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:45:57 crc kubenswrapper[4975]: I0122 11:45:57.436950 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:45:57 crc kubenswrapper[4975]: I0122 11:45:57.515815 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwv98" event={"ID":"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f","Type":"ContainerStarted","Data":"bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb"} Jan 22 
11:45:57 crc kubenswrapper[4975]: I0122 11:45:57.539350 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qwv98" podStartSLOduration=3.107774974 podStartE2EDuration="5.539313746s" podCreationTimestamp="2026-01-22 11:45:52 +0000 UTC" firstStartedPulling="2026-01-22 11:45:54.486420666 +0000 UTC m=+4147.144456776" lastFinishedPulling="2026-01-22 11:45:56.917959438 +0000 UTC m=+4149.575995548" observedRunningTime="2026-01-22 11:45:57.536377945 +0000 UTC m=+4150.194414075" watchObservedRunningTime="2026-01-22 11:45:57.539313746 +0000 UTC m=+4150.197349856" Jan 22 11:46:03 crc kubenswrapper[4975]: I0122 11:46:03.177129 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:46:03 crc kubenswrapper[4975]: I0122 11:46:03.178199 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:46:03 crc kubenswrapper[4975]: I0122 11:46:03.404907 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:46:03 crc kubenswrapper[4975]: I0122 11:46:03.660830 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:46:03 crc kubenswrapper[4975]: I0122 11:46:03.708990 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qwv98"] Jan 22 11:46:05 crc kubenswrapper[4975]: I0122 11:46:05.582983 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qwv98" podUID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerName="registry-server" containerID="cri-o://bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb" gracePeriod=2 Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.099153 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.272783 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-utilities\") pod \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.272971 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdhwp\" (UniqueName: \"kubernetes.io/projected/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-kube-api-access-rdhwp\") pod \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.273035 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-catalog-content\") pod \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\" (UID: \"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f\") " Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.273624 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-utilities" (OuterVolumeSpecName: "utilities") pod "8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" (UID: "8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.282525 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-kube-api-access-rdhwp" (OuterVolumeSpecName: "kube-api-access-rdhwp") pod "8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" (UID: "8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f"). InnerVolumeSpecName "kube-api-access-rdhwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.341708 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" (UID: "8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.375530 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdhwp\" (UniqueName: \"kubernetes.io/projected/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-kube-api-access-rdhwp\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.375563 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.375573 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.594259 4975 generic.go:334] "Generic (PLEG): container finished" podID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerID="bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb" exitCode=0 Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.594303 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwv98" event={"ID":"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f","Type":"ContainerDied","Data":"bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb"} Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.594306 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qwv98" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.594376 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwv98" event={"ID":"8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f","Type":"ContainerDied","Data":"ea9b73694e6b793855c11bfd769d1d55c3fbb566b55b37ce9e9709441b398ed6"} Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.594399 4975 scope.go:117] "RemoveContainer" containerID="bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.637998 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qwv98"] Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.650015 4975 scope.go:117] "RemoveContainer" containerID="496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.654565 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qwv98"] Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.678789 4975 scope.go:117] "RemoveContainer" containerID="fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.724767 4975 scope.go:117] "RemoveContainer" containerID="bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb" Jan 22 11:46:06 crc kubenswrapper[4975]: E0122 11:46:06.725235 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb\": container with ID starting with bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb not found: ID does not exist" containerID="bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.725317 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb"} err="failed to get container status \"bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb\": rpc error: code = NotFound desc = could not find container \"bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb\": container with ID starting with bc78e9f04e9c810ee8b0870214447e7a0af0581d5ec14c3d168bb971ea01a9cb not found: ID does not exist" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.725357 4975 scope.go:117] "RemoveContainer" containerID="496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683" Jan 22 11:46:06 crc kubenswrapper[4975]: E0122 11:46:06.725754 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683\": container with ID starting with 496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683 not found: ID does not exist" containerID="496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.725778 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683"} err="failed to get container status \"496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683\": rpc error: code = NotFound desc = could not find 
container \"496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683\": container with ID starting with 496417eb57cf3aeef31971a9ae2430280b1f72f10f00665d4f01e3d896dc4683 not found: ID does not exist" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.725792 4975 scope.go:117] "RemoveContainer" containerID="fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873" Jan 22 11:46:06 crc kubenswrapper[4975]: E0122 11:46:06.726140 4975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873\": container with ID starting with fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873 not found: ID does not exist" containerID="fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873" Jan 22 11:46:06 crc kubenswrapper[4975]: I0122 11:46:06.726165 4975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873"} err="failed to get container status \"fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873\": rpc error: code = NotFound desc = could not find container \"fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873\": container with ID starting with fec0dc48a01ac1262799b6b1692c6230d3ac39057c1fdc8bda3d35dfb2fee873 not found: ID does not exist" Jan 22 11:46:07 crc kubenswrapper[4975]: I0122 11:46:07.779429 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" path="/var/lib/kubelet/pods/8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f/volumes" Jan 22 11:46:16 crc kubenswrapper[4975]: I0122 11:46:16.680897 4975 generic.go:334] "Generic (PLEG): container finished" podID="ccb52725-c19a-4ee8-afc8-c7d64ca2e9db" containerID="d513bdfa9126bd846559cf5ed3bae38ad2c3c012e01854b696b92bf5984e2c31" exitCode=0 Jan 22 11:46:16 crc kubenswrapper[4975]: I0122 11:46:16.680986 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/crc-debug-5rcnk" event={"ID":"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db","Type":"ContainerDied","Data":"d513bdfa9126bd846559cf5ed3bae38ad2c3c012e01854b696b92bf5984e2c31"} Jan 22 11:46:17 crc kubenswrapper[4975]: I0122 11:46:17.806238 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:46:17 crc kubenswrapper[4975]: I0122 11:46:17.842159 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gc56m/crc-debug-5rcnk"] Jan 22 11:46:17 crc kubenswrapper[4975]: I0122 11:46:17.855621 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gc56m/crc-debug-5rcnk"] Jan 22 11:46:17 crc kubenswrapper[4975]: I0122 11:46:17.881581 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm7c4\" (UniqueName: \"kubernetes.io/projected/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-kube-api-access-bm7c4\") pod \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\" (UID: \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\") " Jan 22 11:46:17 crc kubenswrapper[4975]: I0122 11:46:17.881698 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-host\") pod \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\" (UID: \"ccb52725-c19a-4ee8-afc8-c7d64ca2e9db\") " Jan 22 11:46:17 crc kubenswrapper[4975]: I0122 11:46:17.881807 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-host" (OuterVolumeSpecName: "host") pod "ccb52725-c19a-4ee8-afc8-c7d64ca2e9db" (UID: "ccb52725-c19a-4ee8-afc8-c7d64ca2e9db"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:46:17 crc kubenswrapper[4975]: I0122 11:46:17.882156 4975 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-host\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:17 crc kubenswrapper[4975]: I0122 11:46:17.887195 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-kube-api-access-bm7c4" (OuterVolumeSpecName: "kube-api-access-bm7c4") pod "ccb52725-c19a-4ee8-afc8-c7d64ca2e9db" (UID: "ccb52725-c19a-4ee8-afc8-c7d64ca2e9db"). InnerVolumeSpecName "kube-api-access-bm7c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:46:17 crc kubenswrapper[4975]: I0122 11:46:17.984284 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm7c4\" (UniqueName: \"kubernetes.io/projected/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db-kube-api-access-bm7c4\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:18 crc kubenswrapper[4975]: I0122 11:46:18.701308 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bacfc90e16bef5a1598bf58bd0e8fe932bf03bbc2fe1b763442cd328fe31905f" Jan 22 11:46:18 crc kubenswrapper[4975]: I0122 11:46:18.701674 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-5rcnk" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.036761 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gc56m/crc-debug-4xxqv"] Jan 22 11:46:19 crc kubenswrapper[4975]: E0122 11:46:19.037321 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerName="registry-server" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.037368 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerName="registry-server" Jan 22 11:46:19 crc kubenswrapper[4975]: E0122 11:46:19.037391 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerName="extract-content" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.037404 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerName="extract-content" Jan 22 11:46:19 crc kubenswrapper[4975]: E0122 11:46:19.037423 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb52725-c19a-4ee8-afc8-c7d64ca2e9db" containerName="container-00" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.037436 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb52725-c19a-4ee8-afc8-c7d64ca2e9db" containerName="container-00" Jan 22 11:46:19 crc kubenswrapper[4975]: E0122 11:46:19.037464 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerName="extract-utilities" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.037476 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerName="extract-utilities" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.037799 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbeaf88-9a36-4a1d-90aa-f8e084bebc6f" containerName="registry-server" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.037821 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb52725-c19a-4ee8-afc8-c7d64ca2e9db" containerName="container-00" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.038756 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.106397 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7652c9d-2e80-4162-a5ab-b1d40cf28334-host\") pod \"crc-debug-4xxqv\" (UID: \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\") " pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.106745 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8bvk\" (UniqueName: \"kubernetes.io/projected/d7652c9d-2e80-4162-a5ab-b1d40cf28334-kube-api-access-r8bvk\") pod \"crc-debug-4xxqv\" (UID: \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\") " pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.209176 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7652c9d-2e80-4162-a5ab-b1d40cf28334-host\") pod \"crc-debug-4xxqv\" (UID: \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\") " pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.209376 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7652c9d-2e80-4162-a5ab-b1d40cf28334-host\") pod \"crc-debug-4xxqv\" (UID: \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\") " pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.209376 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8bvk\" (UniqueName: \"kubernetes.io/projected/d7652c9d-2e80-4162-a5ab-b1d40cf28334-kube-api-access-r8bvk\") pod \"crc-debug-4xxqv\" (UID: \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\") " pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.233299 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8bvk\" (UniqueName: \"kubernetes.io/projected/d7652c9d-2e80-4162-a5ab-b1d40cf28334-kube-api-access-r8bvk\") pod \"crc-debug-4xxqv\" (UID: \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\") " pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.354632 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.712763 4975 generic.go:334] "Generic (PLEG): container finished" podID="d7652c9d-2e80-4162-a5ab-b1d40cf28334" containerID="f86f39faa5b1e925deaa7c6f5e952853d599c5123b04c6d9014d71b126b38a80" exitCode=0 Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.712880 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/crc-debug-4xxqv" event={"ID":"d7652c9d-2e80-4162-a5ab-b1d40cf28334","Type":"ContainerDied","Data":"f86f39faa5b1e925deaa7c6f5e952853d599c5123b04c6d9014d71b126b38a80"} Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.713043 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/crc-debug-4xxqv" event={"ID":"d7652c9d-2e80-4162-a5ab-b1d40cf28334","Type":"ContainerStarted","Data":"d521eb576788457da2698657dcf1897bd2ef8ed7544731d5fd5a53e13b459335"} Jan 22 11:46:19 crc kubenswrapper[4975]: I0122 11:46:19.770522 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb52725-c19a-4ee8-afc8-c7d64ca2e9db" path="/var/lib/kubelet/pods/ccb52725-c19a-4ee8-afc8-c7d64ca2e9db/volumes" Jan 22 11:46:20 crc kubenswrapper[4975]: I0122 11:46:20.218961 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gc56m/crc-debug-4xxqv"] Jan 22 11:46:20 crc kubenswrapper[4975]: I0122 11:46:20.229973 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gc56m/crc-debug-4xxqv"] Jan 22 11:46:20 crc kubenswrapper[4975]: I0122 11:46:20.832394 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:20 crc kubenswrapper[4975]: I0122 11:46:20.941743 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8bvk\" (UniqueName: \"kubernetes.io/projected/d7652c9d-2e80-4162-a5ab-b1d40cf28334-kube-api-access-r8bvk\") pod \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\" (UID: \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\") " Jan 22 11:46:20 crc kubenswrapper[4975]: I0122 11:46:20.941806 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7652c9d-2e80-4162-a5ab-b1d40cf28334-host\") pod \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\" (UID: \"d7652c9d-2e80-4162-a5ab-b1d40cf28334\") " Jan 22 11:46:20 crc kubenswrapper[4975]: I0122 11:46:20.942136 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7652c9d-2e80-4162-a5ab-b1d40cf28334-host" (OuterVolumeSpecName: "host") pod "d7652c9d-2e80-4162-a5ab-b1d40cf28334" (UID: "d7652c9d-2e80-4162-a5ab-b1d40cf28334"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:46:20 crc kubenswrapper[4975]: I0122 11:46:20.942520 4975 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7652c9d-2e80-4162-a5ab-b1d40cf28334-host\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:20 crc kubenswrapper[4975]: I0122 11:46:20.946887 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7652c9d-2e80-4162-a5ab-b1d40cf28334-kube-api-access-r8bvk" (OuterVolumeSpecName: "kube-api-access-r8bvk") pod "d7652c9d-2e80-4162-a5ab-b1d40cf28334" (UID: "d7652c9d-2e80-4162-a5ab-b1d40cf28334"). InnerVolumeSpecName "kube-api-access-r8bvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.047167 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8bvk\" (UniqueName: \"kubernetes.io/projected/d7652c9d-2e80-4162-a5ab-b1d40cf28334-kube-api-access-r8bvk\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.493098 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gc56m/crc-debug-5zhx8"] Jan 22 11:46:21 crc kubenswrapper[4975]: E0122 11:46:21.493708 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7652c9d-2e80-4162-a5ab-b1d40cf28334" containerName="container-00" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.493720 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7652c9d-2e80-4162-a5ab-b1d40cf28334" containerName="container-00" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.493885 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7652c9d-2e80-4162-a5ab-b1d40cf28334" containerName="container-00" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.494520 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.559455 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-host\") pod \"crc-debug-5zhx8\" (UID: \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\") " pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.559553 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dqlk\" (UniqueName: \"kubernetes.io/projected/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-kube-api-access-6dqlk\") pod \"crc-debug-5zhx8\" (UID: \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\") " pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.662166 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-host\") pod \"crc-debug-5zhx8\" (UID: \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\") " pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.662459 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dqlk\" (UniqueName: \"kubernetes.io/projected/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-kube-api-access-6dqlk\") pod \"crc-debug-5zhx8\" (UID: \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\") " pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.663172 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-host\") pod \"crc-debug-5zhx8\" (UID: \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\") " pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.688485 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dqlk\" (UniqueName: \"kubernetes.io/projected/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-kube-api-access-6dqlk\") pod \"crc-debug-5zhx8\" (UID: \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\") " 
pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.740075 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d521eb576788457da2698657dcf1897bd2ef8ed7544731d5fd5a53e13b459335" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.740440 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-4xxqv" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.768439 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7652c9d-2e80-4162-a5ab-b1d40cf28334" path="/var/lib/kubelet/pods/d7652c9d-2e80-4162-a5ab-b1d40cf28334/volumes" Jan 22 11:46:21 crc kubenswrapper[4975]: I0122 11:46:21.811295 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:21 crc kubenswrapper[4975]: W0122 11:46:21.847766 4975 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d7a8ab0_2d29_46b6_9b72_4ac877c92db8.slice/crio-80b2cc202d71bef2f839e111fea7c223a06289645876f4a1066c461df7d9ec61 WatchSource:0}: Error finding container 80b2cc202d71bef2f839e111fea7c223a06289645876f4a1066c461df7d9ec61: Status 404 returned error can't find the container with id 80b2cc202d71bef2f839e111fea7c223a06289645876f4a1066c461df7d9ec61 Jan 22 11:46:22 crc kubenswrapper[4975]: I0122 11:46:22.749773 4975 generic.go:334] "Generic (PLEG): container finished" podID="0d7a8ab0-2d29-46b6-9b72-4ac877c92db8" containerID="6a85497adea58bf7122da49de4acda039e49a26523665332f9cac5eec56410f1" exitCode=0 Jan 22 11:46:22 crc kubenswrapper[4975]: I0122 11:46:22.749859 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/crc-debug-5zhx8" event={"ID":"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8","Type":"ContainerDied","Data":"6a85497adea58bf7122da49de4acda039e49a26523665332f9cac5eec56410f1"} Jan 22 11:46:22 crc kubenswrapper[4975]: I0122 11:46:22.750375 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/crc-debug-5zhx8" event={"ID":"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8","Type":"ContainerStarted","Data":"80b2cc202d71bef2f839e111fea7c223a06289645876f4a1066c461df7d9ec61"} Jan 22 11:46:22 crc kubenswrapper[4975]: I0122 11:46:22.787538 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gc56m/crc-debug-5zhx8"] Jan 22 11:46:22 crc kubenswrapper[4975]: I0122 11:46:22.798070 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gc56m/crc-debug-5zhx8"] Jan 22 11:46:23 crc kubenswrapper[4975]: I0122 11:46:23.872322 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:24 crc kubenswrapper[4975]: I0122 11:46:24.005890 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-host\") pod \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\" (UID: \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\") " Jan 22 11:46:24 crc kubenswrapper[4975]: I0122 11:46:24.006018 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-host" (OuterVolumeSpecName: "host") pod "0d7a8ab0-2d29-46b6-9b72-4ac877c92db8" (UID: "0d7a8ab0-2d29-46b6-9b72-4ac877c92db8"). 
InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:46:24 crc kubenswrapper[4975]: I0122 11:46:24.006070 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dqlk\" (UniqueName: \"kubernetes.io/projected/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-kube-api-access-6dqlk\") pod \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\" (UID: \"0d7a8ab0-2d29-46b6-9b72-4ac877c92db8\") " Jan 22 11:46:24 crc kubenswrapper[4975]: I0122 11:46:24.006631 4975 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-host\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:24 crc kubenswrapper[4975]: I0122 11:46:24.460545 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-kube-api-access-6dqlk" (OuterVolumeSpecName: "kube-api-access-6dqlk") pod "0d7a8ab0-2d29-46b6-9b72-4ac877c92db8" (UID: "0d7a8ab0-2d29-46b6-9b72-4ac877c92db8"). InnerVolumeSpecName "kube-api-access-6dqlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:46:24 crc kubenswrapper[4975]: I0122 11:46:24.515605 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dqlk\" (UniqueName: \"kubernetes.io/projected/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8-kube-api-access-6dqlk\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:24 crc kubenswrapper[4975]: I0122 11:46:24.785947 4975 scope.go:117] "RemoveContainer" containerID="6a85497adea58bf7122da49de4acda039e49a26523665332f9cac5eec56410f1" Jan 22 11:46:24 crc kubenswrapper[4975]: I0122 11:46:24.786001 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gc56m/crc-debug-5zhx8" Jan 22 11:46:25 crc kubenswrapper[4975]: I0122 11:46:25.772417 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d7a8ab0-2d29-46b6-9b72-4ac877c92db8" path="/var/lib/kubelet/pods/0d7a8ab0-2d29-46b6-9b72-4ac877c92db8/volumes" Jan 22 11:46:27 crc kubenswrapper[4975]: I0122 11:46:27.436155 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:46:27 crc kubenswrapper[4975]: I0122 11:46:27.436518 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:46:27 crc kubenswrapper[4975]: I0122 11:46:27.436583 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 11:46:27 crc kubenswrapper[4975]: I0122 11:46:27.437324 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05b554e15b11f29874876619dcf16d3adf05366b2e8a168deeb6362afa5a541c"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:46:27 crc kubenswrapper[4975]: I0122 11:46:27.437406 4975 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://05b554e15b11f29874876619dcf16d3adf05366b2e8a168deeb6362afa5a541c" gracePeriod=600 Jan 22 11:46:27 crc kubenswrapper[4975]: I0122 11:46:27.820076 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="05b554e15b11f29874876619dcf16d3adf05366b2e8a168deeb6362afa5a541c" exitCode=0 Jan 22 11:46:27 crc kubenswrapper[4975]: I0122 11:46:27.820161 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"05b554e15b11f29874876619dcf16d3adf05366b2e8a168deeb6362afa5a541c"} Jan 22 11:46:27 crc kubenswrapper[4975]: I0122 11:46:27.820671 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerStarted","Data":"c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad"} Jan 22 11:46:27 crc kubenswrapper[4975]: I0122 11:46:27.820790 4975 scope.go:117] "RemoveContainer" containerID="bc1845ab21b8a1b91d00079278081a97c603b9fa2d1cfebca40a2940c87ecf05" Jan 22 11:46:58 crc kubenswrapper[4975]: I0122 11:46:58.965697 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d584b4fcd-5llj7_fa760241-75cd-4e21-8b47-5e367c1cff52/barbican-api/0.log" Jan 22 11:46:59 crc kubenswrapper[4975]: I0122 11:46:59.067090 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d584b4fcd-5llj7_fa760241-75cd-4e21-8b47-5e367c1cff52/barbican-api-log/0.log" Jan 22 11:46:59 crc kubenswrapper[4975]: I0122 11:46:59.634356 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-749cddb87d-66769_d7922a3a-ab4e-4400-9da2-b01efcf11ca7/barbican-keystone-listener-log/0.log" Jan 22 11:46:59 crc kubenswrapper[4975]: I0122 11:46:59.634764 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-749cddb87d-66769_d7922a3a-ab4e-4400-9da2-b01efcf11ca7/barbican-keystone-listener/0.log" Jan 22 11:46:59 crc kubenswrapper[4975]: I0122 11:46:59.679116 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f5cf4887-x2cb8_4c2be393-e3cf-4356-9f2e-6334f7ee99cc/barbican-worker/0.log" Jan 22 11:46:59 crc kubenswrapper[4975]: I0122 11:46:59.854053 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5kr4c_3e928e99-1e6c-4d3e-a7f3-bd725e29c415/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:46:59 crc kubenswrapper[4975]: I0122 11:46:59.891950 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f5cf4887-x2cb8_4c2be393-e3cf-4356-9f2e-6334f7ee99cc/barbican-worker-log/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.060438 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e8833648-ddd8-4847-b2b0-25d3f8ac08c3/proxy-httpd/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.095829 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e8833648-ddd8-4847-b2b0-25d3f8ac08c3/ceilometer-notification-agent/0.log" Jan 22 11:47:00 crc 
kubenswrapper[4975]: I0122 11:47:00.114248 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e8833648-ddd8-4847-b2b0-25d3f8ac08c3/ceilometer-central-agent/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.146144 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e8833648-ddd8-4847-b2b0-25d3f8ac08c3/sg-core/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.329745 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b0e51991-42f2-4b42-a988-73e2dd891b4d/cinder-api/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.351149 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b0e51991-42f2-4b42-a988-73e2dd891b4d/cinder-api-log/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.528781 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0be551d1-c8bf-440d-b0af-1b40cb1d7e8a/cinder-scheduler/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.585040 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0be551d1-c8bf-440d-b0af-1b40cb1d7e8a/probe/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.641804 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-d5spx_a3494e98-461d-491b-96c0-05c360412e74/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.858783 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-m82g6_fd0b63c7-33dd-4aab-8f22-9b0c0e4eab00/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:00 crc kubenswrapper[4975]: I0122 11:47:00.905055 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-j5kkw_2266f8c0-fa93-40b2-994b-90c948a2a2f8/init/0.log" Jan 22 11:47:01 crc kubenswrapper[4975]: I0122 11:47:01.129314 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-j5kkw_2266f8c0-fa93-40b2-994b-90c948a2a2f8/dnsmasq-dns/0.log" Jan 22 11:47:01 crc kubenswrapper[4975]: I0122 11:47:01.142098 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-j5kkw_2266f8c0-fa93-40b2-994b-90c948a2a2f8/init/0.log" Jan 22 11:47:01 crc kubenswrapper[4975]: I0122 11:47:01.180855 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-l7t47_a2579a01-95bb-4c7c-8bf5-c0c81586e31d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:01 crc kubenswrapper[4975]: I0122 11:47:01.324574 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e68830c0-793a-4984-9a65-427420074a97/glance-log/0.log" Jan 22 11:47:01 crc kubenswrapper[4975]: I0122 11:47:01.359814 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e68830c0-793a-4984-9a65-427420074a97/glance-httpd/0.log" Jan 22 11:47:01 crc kubenswrapper[4975]: I0122 11:47:01.560101 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_347d672f-7247-4f3d-89dd-f57fa3e0c628/glance-httpd/0.log" Jan 22 11:47:01 crc kubenswrapper[4975]: I0122 11:47:01.587762 4975 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_347d672f-7247-4f3d-89dd-f57fa3e0c628/glance-log/0.log" Jan 22 11:47:01 crc kubenswrapper[4975]: I0122 11:47:01.719173 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cb95f4cc6-jdv4m_d2027dc6-aa67-467c-ad8d-e31ccb6d66ee/horizon/0.log" Jan 22 11:47:01 crc kubenswrapper[4975]: I0122 11:47:01.895884 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fzj6v_2235b954-42a4-4665-a60a-e9f4e49b3e45/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:02 crc kubenswrapper[4975]: I0122 11:47:02.088944 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-tzwgz_3b6f2238-afde-4cfc-83aa-c5d56a2afeb4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:02 crc kubenswrapper[4975]: I0122 11:47:02.300297 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cb95f4cc6-jdv4m_d2027dc6-aa67-467c-ad8d-e31ccb6d66ee/horizon-log/0.log" Jan 22 11:47:02 crc kubenswrapper[4975]: I0122 11:47:02.372869 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29484661-rhjvs_e5fa66cb-0369-41ff-8b6f-d2c2367dfa3f/keystone-cron/0.log" Jan 22 11:47:02 crc kubenswrapper[4975]: I0122 11:47:02.436497 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79786fd675-77kk9_a3a8b4a6-2a15-4752-918f-066167d8699d/keystone-api/0.log" Jan 22 11:47:02 crc kubenswrapper[4975]: I0122 11:47:02.553192 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9f501c78-c779-4060-9262-dbbe23739de6/kube-state-metrics/0.log" Jan 22 11:47:02 crc kubenswrapper[4975]: I0122 11:47:02.816773 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-w6n88_f26c2f80-766b-4f5c-9c50-13fba04daff7/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:03 crc kubenswrapper[4975]: I0122 11:47:03.123542 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54575f849-qtpk8_17be82a9-4845-4b3d-85ca-82489d4687b2/neutron-api/0.log" Jan 22 11:47:03 crc kubenswrapper[4975]: I0122 11:47:03.190608 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54575f849-qtpk8_17be82a9-4845-4b3d-85ca-82489d4687b2/neutron-httpd/0.log" Jan 22 11:47:03 crc kubenswrapper[4975]: I0122 11:47:03.250064 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4bfq2_ec5d036f-4b4e-486e-b3fd-f1ccca7b0e57/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:03 crc kubenswrapper[4975]: I0122 11:47:03.863341 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ff6252b0-1c69-4710-9fd2-ab50960f3495/nova-api-log/0.log" Jan 22 11:47:03 crc kubenswrapper[4975]: I0122 11:47:03.920157 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9e829dfc-69fb-4951-b203-53af6945a1e1/nova-cell0-conductor-conductor/0.log" Jan 22 11:47:04 crc kubenswrapper[4975]: I0122 11:47:04.274958 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8c07c882-c139-44d5-be3c-79a3a02cab8a/nova-cell1-conductor-conductor/0.log" Jan 22 11:47:04 crc kubenswrapper[4975]: I0122 11:47:04.338581 4975 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-api-0_ff6252b0-1c69-4710-9fd2-ab50960f3495/nova-api-api/0.log" Jan 22 11:47:04 crc kubenswrapper[4975]: I0122 11:47:04.361885 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1a88d799-6e8f-4e1f-9f81-3c30b8d7a2ec/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 11:47:04 crc kubenswrapper[4975]: I0122 11:47:04.561830 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-c25vl_f8154d48-3ee4-45f3-bf77-45befe6cdc3a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:04 crc kubenswrapper[4975]: I0122 11:47:04.653457 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_983309a9-5940-4b5f-b84d-a219bbe96d93/nova-metadata-log/0.log" Jan 22 11:47:05 crc kubenswrapper[4975]: I0122 11:47:05.081021 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_621048f0-ccdc-4953-97d4-ba5286ff3345/mysql-bootstrap/0.log" Jan 22 11:47:05 crc kubenswrapper[4975]: I0122 11:47:05.107545 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_86380f07-beeb-467d-aabd-0927af769a35/nova-scheduler-scheduler/0.log" Jan 22 11:47:05 crc kubenswrapper[4975]: I0122 11:47:05.336541 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_621048f0-ccdc-4953-97d4-ba5286ff3345/mysql-bootstrap/0.log" Jan 22 11:47:05 crc kubenswrapper[4975]: I0122 11:47:05.354536 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_621048f0-ccdc-4953-97d4-ba5286ff3345/galera/0.log" Jan 22 11:47:05 crc kubenswrapper[4975]: I0122 11:47:05.569118 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f6d809d7-faf1-4033-a986-f49f903d36f1/mysql-bootstrap/0.log" Jan 22 11:47:05 crc kubenswrapper[4975]: I0122 11:47:05.850488 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f6d809d7-faf1-4033-a986-f49f903d36f1/galera/0.log" Jan 22 11:47:05 crc kubenswrapper[4975]: I0122 11:47:05.858678 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f6d809d7-faf1-4033-a986-f49f903d36f1/mysql-bootstrap/0.log" Jan 22 11:47:06 crc kubenswrapper[4975]: I0122 11:47:06.032480 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1217b84a-f2e0-49f9-a237-30f9493f1ca4/openstackclient/0.log" Jan 22 11:47:06 crc kubenswrapper[4975]: I0122 11:47:06.131554 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kkcjz_b366080b-ad75-4205-9257-9d8a8ac5aed8/openstack-network-exporter/0.log" Jan 22 11:47:06 crc kubenswrapper[4975]: I0122 11:47:06.341019 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mj5f2_47de946e-512c-49a6-b633-c3c1640fb4bb/ovn-controller/0.log" Jan 22 11:47:06 crc kubenswrapper[4975]: I0122 11:47:06.361114 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_983309a9-5940-4b5f-b84d-a219bbe96d93/nova-metadata-metadata/0.log" Jan 22 11:47:06 crc kubenswrapper[4975]: I0122 11:47:06.569688 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jqvh9_90f8cfc5-4b43-4d89-8c42-a39306a0f1df/ovsdb-server-init/0.log" Jan 22 11:47:06 crc kubenswrapper[4975]: I0122 11:47:06.720309 4975 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jqvh9_90f8cfc5-4b43-4d89-8c42-a39306a0f1df/ovsdb-server/0.log" Jan 22 11:47:06 crc kubenswrapper[4975]: I0122 11:47:06.726503 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jqvh9_90f8cfc5-4b43-4d89-8c42-a39306a0f1df/ovsdb-server-init/0.log" Jan 22 11:47:06 crc kubenswrapper[4975]: I0122 11:47:06.777718 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jqvh9_90f8cfc5-4b43-4d89-8c42-a39306a0f1df/ovs-vswitchd/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.008186 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dg2vt_0b6be64b-5e04-4797-8617-681e72ebb9d7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.069658 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_13d61e05-db24-4d01-bfc6-5ff85c254d7c/openstack-network-exporter/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.112832 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_13d61e05-db24-4d01-bfc6-5ff85c254d7c/ovn-northd/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.170112 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3ce5bdc2-44a6-4071-8df7-736a1992fee0/openstack-network-exporter/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.251669 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3ce5bdc2-44a6-4071-8df7-736a1992fee0/ovsdbserver-nb/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.407958 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a50b6032-49e5-467e-9728-b4c43293dbcf/openstack-network-exporter/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.452235 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a50b6032-49e5-467e-9728-b4c43293dbcf/ovsdbserver-sb/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.699828 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85bb784b78-f8g8c_3dbff787-f548-4837-acb9-e1538c513330/placement-log/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.746475 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85bb784b78-f8g8c_3dbff787-f548-4837-acb9-e1538c513330/placement-api/0.log" Jan 22 11:47:07 crc kubenswrapper[4975]: I0122 11:47:07.765029 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6110323c-c3aa-4d7b-a201-20d85afa29d3/setup-container/0.log" Jan 22 11:47:08 crc kubenswrapper[4975]: I0122 11:47:08.791164 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6110323c-c3aa-4d7b-a201-20d85afa29d3/setup-container/0.log" Jan 22 11:47:08 crc kubenswrapper[4975]: I0122 11:47:08.888046 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6110323c-c3aa-4d7b-a201-20d85afa29d3/rabbitmq/0.log" Jan 22 11:47:08 crc kubenswrapper[4975]: I0122 11:47:08.947225 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9f2d061f-103c-4b8a-aa45-3da02ffebff5/setup-container/0.log" Jan 22 11:47:09 crc kubenswrapper[4975]: I0122 11:47:09.070521 4975 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9f2d061f-103c-4b8a-aa45-3da02ffebff5/setup-container/0.log" Jan 22 11:47:09 crc kubenswrapper[4975]: I0122 11:47:09.129162 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9f2d061f-103c-4b8a-aa45-3da02ffebff5/rabbitmq/0.log" Jan 22 11:47:09 crc kubenswrapper[4975]: I0122 11:47:09.189144 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-h8mxm_9f5109b8-3671-4f96-8d9f-625b4c02b1b6/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:09 crc kubenswrapper[4975]: I0122 11:47:09.405088 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8v6sw_d6c907cc-684d-40a8-905a-977845138321/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:09 crc kubenswrapper[4975]: I0122 11:47:09.414709 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-g2x7p_f15014da-c77d-4583-9547-dceec5d6628a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:09 crc kubenswrapper[4975]: I0122 11:47:09.592253 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-flsmq_36d854d2-b0f2-4864-b614-715d94ba5631/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:09 crc kubenswrapper[4975]: I0122 11:47:09.623732 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fs85s_f3c65871-e82d-4687-99c7-4af5194d7c98/ssh-known-hosts-edpm-deployment/0.log" Jan 22 11:47:10 crc kubenswrapper[4975]: I0122 11:47:10.012706 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78ffc5fbc9-46msq_91f6b2db-2af6-4817-a5cc-9c04fe10070f/proxy-httpd/0.log" Jan 22 11:47:10 crc kubenswrapper[4975]: I0122 11:47:10.552734 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78ffc5fbc9-46msq_91f6b2db-2af6-4817-a5cc-9c04fe10070f/proxy-server/0.log" Jan 22 11:47:10 crc kubenswrapper[4975]: I0122 11:47:10.785660 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/account-auditor/0.log" Jan 22 11:47:10 crc kubenswrapper[4975]: I0122 11:47:10.787125 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7kqbs_ee721893-cf9b-4712-b421-19d6a59d7cf7/swift-ring-rebalance/0.log" Jan 22 11:47:10 crc kubenswrapper[4975]: I0122 11:47:10.995166 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/account-reaper/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.078464 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/container-auditor/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.080031 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/account-replicator/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.123414 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/account-server/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.307315 4975 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/container-replicator/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.312884 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/container-server/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.341698 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/container-updater/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.383698 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-auditor/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.515239 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-expirer/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.577671 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-server/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.599373 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-replicator/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.602058 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/object-updater/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.788689 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/swift-recon-cron/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.796348 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_258efd08-8841-42a5-bfff-035926fa4a8a/rsync/0.log" Jan 22 11:47:11 crc kubenswrapper[4975]: I0122 11:47:11.920833 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j5frs_89ca7561-30de-4093-8f2f-09455a740f94/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:12 crc kubenswrapper[4975]: I0122 11:47:12.010870 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2401e7da-ce1d-45c2-b62e-bd617338ac17/tempest-tests-tempest-tests-runner/0.log" Jan 22 11:47:12 crc kubenswrapper[4975]: I0122 11:47:12.213140 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3a80b919-f543-4c35-8432-79eb4b8c9c2a/test-operator-logs-container/0.log" Jan 22 11:47:12 crc kubenswrapper[4975]: I0122 11:47:12.311313 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bk27r_1f779aa8-c70d-46ba-a278-b4062a5ddd01/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 11:47:23 crc kubenswrapper[4975]: I0122 11:47:23.329905 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5525f79d-14ec-480a-b69f-7c003fdc6dba/memcached/0.log" Jan 22 11:47:41 crc kubenswrapper[4975]: I0122 11:47:41.442588 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/util/0.log" 
Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.031281 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/util/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.073855 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/pull/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.089234 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/pull/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.251530 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/extract/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.260876 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/util/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.281930 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3502a955001389a16e834eb5f2b8fd5da603dc9bd8a5caca409325af58d2f6t_d9c59824-f969-44a3-ae16-9e2d998ca883/pull/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.526161 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-5w2lh_f1fa565f-67df-4a8f-9f10-33397d530bbb/manager/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.543363 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-wsw9j_1cb3165d-a089-4c3d-9d70-73145248cd5c/manager/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.677298 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-6dln8_17061bf9-682c-4294-950a-18a5c5a00577/manager/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.821363 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-hgw2h_9b23e2d7-2430-4770-bb64-b2d8a14c2f7c/manager/0.log" Jan 22 11:47:42 crc kubenswrapper[4975]: I0122 11:47:42.882028 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-kfvm6_e8365e14-67ba-495f-a996-a2b63c7a391e/manager/0.log" Jan 22 11:47:43 crc kubenswrapper[4975]: I0122 11:47:43.016902 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-g49ht_96fb0fc0-a7b6-4607-b879-9bfb9f318742/manager/0.log" Jan 22 11:47:43 crc kubenswrapper[4975]: I0122 11:47:43.253124 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-dm74c_962350e2-bd43-433c-af57-8094db573873/manager/0.log" Jan 22 11:47:43 crc kubenswrapper[4975]: I0122 11:47:43.340314 4975 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-s5qw6_fe34d1a8-dae0-4b05-b51f-9f577a2ff8de/manager/0.log" Jan 22 11:47:43 crc kubenswrapper[4975]: I0122 11:47:43.460160 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-jk546_f1fc9c22-ce13-4bde-8901-08225613757a/manager/0.log" Jan 22 11:47:43 crc kubenswrapper[4975]: I0122 11:47:43.520222 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-48bd6_ff361e81-48c6-4926-84d9-fcca6fba71f6/manager/0.log" Jan 22 11:47:43 crc kubenswrapper[4975]: I0122 11:47:43.690434 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-qcx4k_a600298c-821b-47b5-9018-8d7ef15267c8/manager/0.log" Jan 22 11:47:43 crc kubenswrapper[4975]: I0122 11:47:43.774225 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-27zdj_aa6e0f83-d561-4537-934c-54373200aa1f/manager/0.log" Jan 22 11:47:43 crc kubenswrapper[4975]: I0122 11:47:43.923957 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-4jt9j_3bd8a1f0-8cb5-4051-95e0-5e6dac1205bb/manager/0.log" Jan 22 11:47:44 crc kubenswrapper[4975]: I0122 11:47:44.022692 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-s64bc_9c9f78cf-4a34-4dd9-9d0a-feb77ca25b62/manager/0.log" Jan 22 11:47:44 crc kubenswrapper[4975]: I0122 11:47:44.150069 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854kcqc7_dc8e73d1-2925-43ef-a854-eb93ead085d5/manager/0.log" Jan 22 11:47:44 crc kubenswrapper[4975]: I0122 11:47:44.384744 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-9b49577dc-tm6xf_0c16cec1-1906-45c5-83be-5b14f7e44321/operator/0.log" Jan 22 11:47:44 crc kubenswrapper[4975]: I0122 11:47:44.488990 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rvw6h_8f4b0804-c42b-4aad-bb37-d55746bcffc6/registry-server/0.log" Jan 22 11:47:44 crc kubenswrapper[4975]: I0122 11:47:44.625300 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-wmzml_834b49cf-0565-4e20-b598-7d01ea144148/manager/0.log" Jan 22 11:47:44 crc kubenswrapper[4975]: I0122 11:47:44.836524 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-pspwb_177b52ae-55bc-40d1-badb-253927400560/manager/0.log" Jan 22 11:47:44 crc kubenswrapper[4975]: I0122 11:47:44.956691 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kqh4f_d7834e51-fb9e-4f97-a385-dd98869ece3b/operator/0.log" Jan 22 11:47:45 crc kubenswrapper[4975]: I0122 11:47:45.252646 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-x2kk6_ea739801-0b35-4cd3-a198-a58e93059f88/manager/0.log" Jan 22 11:47:45 crc kubenswrapper[4975]: I0122 11:47:45.371967 4975 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-l8cz4_c6673acf-6a08-449e-b67a-e1245dbe2206/manager/0.log" Jan 22 11:47:45 crc kubenswrapper[4975]: I0122 11:47:45.436975 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-fpxsm_17acd27c-28fa-4f87-a3e3-c0c63adeeb9a/manager/0.log" Jan 22 11:47:45 crc kubenswrapper[4975]: I0122 11:47:45.447072 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-fcdd7b6c4-lq88q_2203164e-ced5-46bd-85c1-92eb94c7fb9f/manager/0.log" Jan 22 11:47:45 crc kubenswrapper[4975]: I0122 11:47:45.565993 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-ddrzg_88fd0de4-4508-450b-97b4-3206c50fb0a2/manager/0.log" Jan 22 11:48:07 crc kubenswrapper[4975]: I0122 11:48:07.405833 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d6l9z_5df06ca3-e772-4edf-8ccb-a4ca8bac5a7f/control-plane-machine-set-operator/0.log" Jan 22 11:48:07 crc kubenswrapper[4975]: I0122 11:48:07.569750 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-prtg7_af2b3377-4d92-4b44-bee1-338c9156c095/kube-rbac-proxy/0.log" Jan 22 11:48:07 crc kubenswrapper[4975]: I0122 11:48:07.624050 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-prtg7_af2b3377-4d92-4b44-bee1-338c9156c095/machine-api-operator/0.log" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.240129 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zt4vs"] Jan 22 11:48:17 crc kubenswrapper[4975]: E0122 11:48:17.241065 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7a8ab0-2d29-46b6-9b72-4ac877c92db8" containerName="container-00" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.241079 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7a8ab0-2d29-46b6-9b72-4ac877c92db8" containerName="container-00" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.241432 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d7a8ab0-2d29-46b6-9b72-4ac877c92db8" containerName="container-00" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.243095 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.262466 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt4vs"] Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.345053 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-catalog-content\") pod \"certified-operators-zt4vs\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.345497 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-utilities\") pod \"certified-operators-zt4vs\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.345543 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq96k\" (UniqueName: \"kubernetes.io/projected/3a4efe14-8594-41be-a265-ef19c7c4065b-kube-api-access-cq96k\") pod \"certified-operators-zt4vs\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.446954 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-catalog-content\") pod \"certified-operators-zt4vs\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.447441 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-catalog-content\") pod \"certified-operators-zt4vs\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.447638 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-utilities\") pod \"certified-operators-zt4vs\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.447765 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq96k\" (UniqueName: \"kubernetes.io/projected/3a4efe14-8594-41be-a265-ef19c7c4065b-kube-api-access-cq96k\") pod \"certified-operators-zt4vs\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.447925 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-utilities\") pod \"certified-operators-zt4vs\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.473877 4975 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cq96k\" (UniqueName: \"kubernetes.io/projected/3a4efe14-8594-41be-a265-ef19c7c4065b-kube-api-access-cq96k\") pod \"certified-operators-zt4vs\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:17 crc kubenswrapper[4975]: I0122 11:48:17.574935 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:18 crc kubenswrapper[4975]: I0122 11:48:18.131968 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt4vs"] Jan 22 11:48:18 crc kubenswrapper[4975]: I0122 11:48:18.890886 4975 generic.go:334] "Generic (PLEG): container finished" podID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerID="e9a6fc335b5a5a464c4fdcf9e0d38b4555d955eb767ac39e0ad5112a86b37c3d" exitCode=0 Jan 22 11:48:18 crc kubenswrapper[4975]: I0122 11:48:18.890952 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt4vs" event={"ID":"3a4efe14-8594-41be-a265-ef19c7c4065b","Type":"ContainerDied","Data":"e9a6fc335b5a5a464c4fdcf9e0d38b4555d955eb767ac39e0ad5112a86b37c3d"} Jan 22 11:48:18 crc kubenswrapper[4975]: I0122 11:48:18.892109 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt4vs" event={"ID":"3a4efe14-8594-41be-a265-ef19c7c4065b","Type":"ContainerStarted","Data":"9586f5f0c6844ae13f53b233b1e22c55aa6181ed021a339588ee5a16077112be"} Jan 22 11:48:18 crc kubenswrapper[4975]: I0122 11:48:18.893367 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:48:19 crc kubenswrapper[4975]: I0122 11:48:19.904396 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt4vs" event={"ID":"3a4efe14-8594-41be-a265-ef19c7c4065b","Type":"ContainerStarted","Data":"11a48b95a236381bc954abebb9ed8060d4e2704b957d13254edb5b3ce22c7d28"} Jan 22 11:48:20 crc kubenswrapper[4975]: I0122 11:48:20.914665 4975 generic.go:334] "Generic (PLEG): container finished" podID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerID="11a48b95a236381bc954abebb9ed8060d4e2704b957d13254edb5b3ce22c7d28" exitCode=0 Jan 22 11:48:20 crc kubenswrapper[4975]: I0122 11:48:20.914937 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt4vs" event={"ID":"3a4efe14-8594-41be-a265-ef19c7c4065b","Type":"ContainerDied","Data":"11a48b95a236381bc954abebb9ed8060d4e2704b957d13254edb5b3ce22c7d28"} Jan 22 11:48:21 crc kubenswrapper[4975]: I0122 11:48:21.943037 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt4vs" event={"ID":"3a4efe14-8594-41be-a265-ef19c7c4065b","Type":"ContainerStarted","Data":"37075fdbd282f53e22b477cfc31d2b4744256ae9acdd2b0bd5f2433d26f15b7e"} Jan 22 11:48:21 crc kubenswrapper[4975]: I0122 11:48:21.965990 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zt4vs" podStartSLOduration=2.554563359 podStartE2EDuration="4.965968816s" podCreationTimestamp="2026-01-22 11:48:17 +0000 UTC" firstStartedPulling="2026-01-22 11:48:18.893108236 +0000 UTC m=+4291.551144346" lastFinishedPulling="2026-01-22 11:48:21.304513693 +0000 UTC m=+4293.962549803" observedRunningTime="2026-01-22 11:48:21.959187743 +0000 UTC m=+4294.617223873" watchObservedRunningTime="2026-01-22 
11:48:21.965968816 +0000 UTC m=+4294.624004916" Jan 22 11:48:22 crc kubenswrapper[4975]: I0122 11:48:22.638940 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-l5qtb_961f5491-5692-4052-a7de-b05dcf68274b/cert-manager-controller/0.log" Jan 22 11:48:22 crc kubenswrapper[4975]: I0122 11:48:22.921790 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-4hxcx_dd1688d7-4085-4be3-9d16-3d212a30c5cb/cert-manager-cainjector/0.log" Jan 22 11:48:22 crc kubenswrapper[4975]: I0122 11:48:22.951575 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qlcb7_1c1a204e-dbad-4479-854d-98812e592f54/cert-manager-webhook/0.log" Jan 22 11:48:27 crc kubenswrapper[4975]: I0122 11:48:27.436361 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:48:27 crc kubenswrapper[4975]: I0122 11:48:27.437089 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:48:27 crc kubenswrapper[4975]: I0122 11:48:27.575462 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:27 crc kubenswrapper[4975]: I0122 11:48:27.575555 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:27 crc kubenswrapper[4975]: I0122 11:48:27.621421 4975 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:28 crc kubenswrapper[4975]: I0122 11:48:28.046301 4975 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:28 crc kubenswrapper[4975]: I0122 11:48:28.094442 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt4vs"] Jan 22 11:48:30 crc kubenswrapper[4975]: I0122 11:48:30.019190 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zt4vs" podUID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerName="registry-server" containerID="cri-o://37075fdbd282f53e22b477cfc31d2b4744256ae9acdd2b0bd5f2433d26f15b7e" gracePeriod=2 Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.030344 4975 generic.go:334] "Generic (PLEG): container finished" podID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerID="37075fdbd282f53e22b477cfc31d2b4744256ae9acdd2b0bd5f2433d26f15b7e" exitCode=0 Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.030396 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt4vs" event={"ID":"3a4efe14-8594-41be-a265-ef19c7c4065b","Type":"ContainerDied","Data":"37075fdbd282f53e22b477cfc31d2b4744256ae9acdd2b0bd5f2433d26f15b7e"} Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.030734 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zt4vs" event={"ID":"3a4efe14-8594-41be-a265-ef19c7c4065b","Type":"ContainerDied","Data":"9586f5f0c6844ae13f53b233b1e22c55aa6181ed021a339588ee5a16077112be"} Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.030754 4975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9586f5f0c6844ae13f53b233b1e22c55aa6181ed021a339588ee5a16077112be" Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.060086 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.218992 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-catalog-content\") pod \"3a4efe14-8594-41be-a265-ef19c7c4065b\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.219079 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-utilities\") pod \"3a4efe14-8594-41be-a265-ef19c7c4065b\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.219133 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq96k\" (UniqueName: \"kubernetes.io/projected/3a4efe14-8594-41be-a265-ef19c7c4065b-kube-api-access-cq96k\") pod \"3a4efe14-8594-41be-a265-ef19c7c4065b\" (UID: \"3a4efe14-8594-41be-a265-ef19c7c4065b\") " Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.219944 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-utilities" (OuterVolumeSpecName: "utilities") pod "3a4efe14-8594-41be-a265-ef19c7c4065b" (UID: "3a4efe14-8594-41be-a265-ef19c7c4065b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.224339 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4efe14-8594-41be-a265-ef19c7c4065b-kube-api-access-cq96k" (OuterVolumeSpecName: "kube-api-access-cq96k") pod "3a4efe14-8594-41be-a265-ef19c7c4065b" (UID: "3a4efe14-8594-41be-a265-ef19c7c4065b"). InnerVolumeSpecName "kube-api-access-cq96k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.266675 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a4efe14-8594-41be-a265-ef19c7c4065b" (UID: "3a4efe14-8594-41be-a265-ef19c7c4065b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.321960 4975 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.322012 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq96k\" (UniqueName: \"kubernetes.io/projected/3a4efe14-8594-41be-a265-ef19c7c4065b-kube-api-access-cq96k\") on node \"crc\" DevicePath \"\"" Jan 22 11:48:31 crc kubenswrapper[4975]: I0122 11:48:31.322028 4975 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a4efe14-8594-41be-a265-ef19c7c4065b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:48:32 crc kubenswrapper[4975]: I0122 11:48:32.039825 4975 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt4vs" Jan 22 11:48:32 crc kubenswrapper[4975]: I0122 11:48:32.066016 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt4vs"] Jan 22 11:48:32 crc kubenswrapper[4975]: I0122 11:48:32.081107 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zt4vs"] Jan 22 11:48:33 crc kubenswrapper[4975]: I0122 11:48:33.766271 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a4efe14-8594-41be-a265-ef19c7c4065b" path="/var/lib/kubelet/pods/3a4efe14-8594-41be-a265-ef19c7c4065b/volumes" Jan 22 11:48:35 crc kubenswrapper[4975]: I0122 11:48:35.224997 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7z2df_8f35ea3f-ed83-4201-a249-fc379ee0951c/nmstate-console-plugin/0.log" Jan 22 11:48:35 crc kubenswrapper[4975]: I0122 11:48:35.392975 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mjdwp_0f5b8079-630d-42c9-be23-abb29c55fcc8/nmstate-handler/0.log" Jan 22 11:48:35 crc kubenswrapper[4975]: I0122 11:48:35.426725 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hqp5c_7bcd90cd-87a3-4def-bff8-b31659a68fd2/kube-rbac-proxy/0.log" Jan 22 11:48:35 crc kubenswrapper[4975]: I0122 11:48:35.454313 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hqp5c_7bcd90cd-87a3-4def-bff8-b31659a68fd2/nmstate-metrics/0.log" Jan 22 11:48:35 crc kubenswrapper[4975]: I0122 11:48:35.630207 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-d8nq2_1d79f2dc-129c-4d57-94d5-3e2b73dad858/nmstate-operator/0.log" Jan 22 11:48:35 crc kubenswrapper[4975]: I0122 11:48:35.667811 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-r9mqr_d10d736d-0f74-4f33-bcbc-31d9a55f3370/nmstate-webhook/0.log" Jan 22 11:48:57 crc kubenswrapper[4975]: I0122 11:48:57.436673 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:48:57 crc kubenswrapper[4975]: I0122 11:48:57.437232 4975 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[4975]: I0122 11:49:05.535582 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-d2mbh_88d55612-c72e-4893-8ef2-a524fa613e7a/kube-rbac-proxy/0.log" Jan 22 11:49:05 crc kubenswrapper[4975]: I0122 11:49:05.573784 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-d2mbh_88d55612-c72e-4893-8ef2-a524fa613e7a/controller/0.log" Jan 22 11:49:05 crc kubenswrapper[4975]: I0122 11:49:05.691947 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-frr-files/0.log" Jan 22 11:49:05 crc kubenswrapper[4975]: I0122 11:49:05.852023 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-frr-files/0.log" Jan 22 11:49:05 crc kubenswrapper[4975]: I0122 11:49:05.871238 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-metrics/0.log" Jan 22 11:49:05 crc kubenswrapper[4975]: I0122 11:49:05.871553 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-reloader/0.log" Jan 22 11:49:05 crc kubenswrapper[4975]: I0122 11:49:05.876916 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-reloader/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.104137 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-frr-files/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.104702 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-metrics/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.127305 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-metrics/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.152394 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-reloader/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.286076 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-frr-files/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.311435 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-reloader/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.326156 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/controller/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.339111 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/cp-metrics/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 
11:49:06.518244 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/kube-rbac-proxy/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.536023 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/kube-rbac-proxy-frr/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.546640 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/frr-metrics/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.718191 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/reloader/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.823110 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6tqxg_b0d54a39-8b95-459f-a047-244ed0e3a156/frr-k8s-webhook-server/0.log" Jan 22 11:49:06 crc kubenswrapper[4975]: I0122 11:49:06.975488 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b68b5dc66-l54nz_3c3466c9-e4db-41b6-90c7-132cf6c0dae7/manager/0.log" Jan 22 11:49:07 crc kubenswrapper[4975]: I0122 11:49:07.183275 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b6ddfffdc-46wrh_c6b7738a-4a9e-4f09-a3a0-5a413eed76b9/webhook-server/0.log" Jan 22 11:49:07 crc kubenswrapper[4975]: I0122 11:49:07.304698 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2knnf_1b770e2d-a830-4d27-8567-52246e202aa3/kube-rbac-proxy/0.log" Jan 22 11:49:07 crc kubenswrapper[4975]: I0122 11:49:07.811011 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2knnf_1b770e2d-a830-4d27-8567-52246e202aa3/speaker/0.log" Jan 22 11:49:07 crc kubenswrapper[4975]: I0122 11:49:07.912994 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vcvnb_f8d84cf3-8bf0-47bb-ae46-eedcc76c4892/frr/0.log" Jan 22 11:49:21 crc kubenswrapper[4975]: I0122 11:49:21.527811 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/util/0.log" Jan 22 11:49:21 crc kubenswrapper[4975]: I0122 11:49:21.795154 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/pull/0.log" Jan 22 11:49:21 crc kubenswrapper[4975]: I0122 11:49:21.867536 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/util/0.log" Jan 22 11:49:21 crc kubenswrapper[4975]: I0122 11:49:21.870778 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/pull/0.log" Jan 22 11:49:21 crc kubenswrapper[4975]: I0122 11:49:21.982418 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/util/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 
11:49:22.036965 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/extract/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 11:49:22.112530 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9lhj2_ff442b38-0291-4b8a-a51e-01cc0397ea47/pull/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 11:49:22.177639 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/util/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 11:49:22.393950 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/util/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 11:49:22.436808 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/pull/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 11:49:22.448774 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/pull/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 11:49:22.616388 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/util/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 11:49:22.626939 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/pull/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 11:49:22.665242 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zsl9v_cd03090e-9416-416a-a3cc-81d567aba65c/extract/0.log" Jan 22 11:49:22 crc kubenswrapper[4975]: I0122 11:49:22.824979 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-utilities/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 11:49:23.030324 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-content/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 11:49:23.031266 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-utilities/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 11:49:23.079926 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-content/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 11:49:23.193349 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-utilities/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 
11:49:23.233173 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/extract-content/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 11:49:23.402908 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-utilities/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 11:49:23.689719 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-utilities/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 11:49:23.703576 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfznn_97ce1bb8-c814-40c5-a416-a5f6f5e30d2d/registry-server/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 11:49:23.762219 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-content/0.log" Jan 22 11:49:23 crc kubenswrapper[4975]: I0122 11:49:23.775525 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-content/0.log" Jan 22 11:49:24 crc kubenswrapper[4975]: I0122 11:49:24.383495 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-content/0.log" Jan 22 11:49:24 crc kubenswrapper[4975]: I0122 11:49:24.389909 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/extract-utilities/0.log" Jan 22 11:49:24 crc kubenswrapper[4975]: I0122 11:49:24.588889 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pkfxd_6cc0ba15-cad5-416b-8428-6614ca1ebbec/marketplace-operator/0.log" Jan 22 11:49:24 crc kubenswrapper[4975]: I0122 11:49:24.637449 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-utilities/0.log" Jan 22 11:49:24 crc kubenswrapper[4975]: I0122 11:49:24.866546 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-utilities/0.log" Jan 22 11:49:24 crc kubenswrapper[4975]: I0122 11:49:24.902274 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-content/0.log" Jan 22 11:49:24 crc kubenswrapper[4975]: I0122 11:49:24.931196 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-content/0.log" Jan 22 11:49:25 crc kubenswrapper[4975]: I0122 11:49:25.030146 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9fpv2_1e426aac-ee27-4d77-b5ee-a16e9c105097/registry-server/0.log" Jan 22 11:49:25 crc kubenswrapper[4975]: I0122 11:49:25.139617 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-utilities/0.log" Jan 22 11:49:25 crc kubenswrapper[4975]: I0122 
11:49:25.152691 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/extract-content/0.log" Jan 22 11:49:25 crc kubenswrapper[4975]: I0122 11:49:25.300560 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lkkwb_59934cb1-4316-4b56-aaa6-bd2eb9eaa394/registry-server/0.log" Jan 22 11:49:25 crc kubenswrapper[4975]: I0122 11:49:25.371404 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-utilities/0.log" Jan 22 11:49:25 crc kubenswrapper[4975]: I0122 11:49:25.934357 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-content/0.log" Jan 22 11:49:25 crc kubenswrapper[4975]: I0122 11:49:25.942580 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-utilities/0.log" Jan 22 11:49:25 crc kubenswrapper[4975]: I0122 11:49:25.960392 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-content/0.log" Jan 22 11:49:26 crc kubenswrapper[4975]: I0122 11:49:26.118429 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-utilities/0.log" Jan 22 11:49:26 crc kubenswrapper[4975]: I0122 11:49:26.120355 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/extract-content/0.log" Jan 22 11:49:26 crc kubenswrapper[4975]: I0122 11:49:26.619394 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhcx5_f3c75e99-22ed-439b-a7a3-e3d4163d7809/registry-server/0.log" Jan 22 11:49:27 crc kubenswrapper[4975]: I0122 11:49:27.436664 4975 patch_prober.go:28] interesting pod/machine-config-daemon-fmw72 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:49:27 crc kubenswrapper[4975]: I0122 11:49:27.436758 4975 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[4975]: I0122 11:49:27.436832 4975 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" Jan 22 11:49:27 crc kubenswrapper[4975]: I0122 11:49:27.437663 4975 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad"} pod="openshift-machine-config-operator/machine-config-daemon-fmw72" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:49:27 crc kubenswrapper[4975]: I0122 11:49:27.437719 4975 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerName="machine-config-daemon" containerID="cri-o://c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" gracePeriod=600 Jan 22 11:49:27 crc kubenswrapper[4975]: E0122 11:49:27.565125 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:49:28 crc kubenswrapper[4975]: I0122 11:49:28.548082 4975 generic.go:334] "Generic (PLEG): container finished" podID="c717ed91-9c30-47bc-b814-ceb1b25939a8" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" exitCode=0 Jan 22 11:49:28 crc kubenswrapper[4975]: I0122 11:49:28.548138 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" event={"ID":"c717ed91-9c30-47bc-b814-ceb1b25939a8","Type":"ContainerDied","Data":"c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad"} Jan 22 11:49:28 crc kubenswrapper[4975]: I0122 11:49:28.548421 4975 scope.go:117] "RemoveContainer" containerID="05b554e15b11f29874876619dcf16d3adf05366b2e8a168deeb6362afa5a541c" Jan 22 11:49:28 crc kubenswrapper[4975]: I0122 11:49:28.549209 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:49:28 crc kubenswrapper[4975]: E0122 11:49:28.549573 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:49:40 crc kubenswrapper[4975]: I0122 11:49:40.755625 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:49:40 crc kubenswrapper[4975]: E0122 11:49:40.756720 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:49:45 crc kubenswrapper[4975]: E0122 11:49:45.234930 4975 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.51:33476->38.102.83.51:37165: write tcp 38.102.83.51:33476->38.102.83.51:37165: write: connection reset by peer Jan 22 11:49:51 crc kubenswrapper[4975]: I0122 11:49:51.754533 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:49:51 crc kubenswrapper[4975]: E0122 11:49:51.755224 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:50:06 crc kubenswrapper[4975]: I0122 11:50:06.756217 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:50:06 crc kubenswrapper[4975]: E0122 11:50:06.757263 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:50:19 crc kubenswrapper[4975]: I0122 11:50:19.755665 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:50:19 crc kubenswrapper[4975]: E0122 11:50:19.756411 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:50:33 crc kubenswrapper[4975]: I0122 11:50:33.755172 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:50:33 crc kubenswrapper[4975]: E0122 11:50:33.755964 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:50:47 crc kubenswrapper[4975]: I0122 11:50:47.766681 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:50:47 crc kubenswrapper[4975]: E0122 11:50:47.767770 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:51:01 crc kubenswrapper[4975]: I0122 11:51:01.756197 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:51:01 crc kubenswrapper[4975]: E0122 11:51:01.757089 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:51:11 crc kubenswrapper[4975]: I0122 11:51:11.559709 4975 generic.go:334] "Generic (PLEG): container finished" podID="18541382-fcdb-4c06-a929-3753b373d967" containerID="6862723075b7bd76fb9e8451ecb6c313874d5a39186b5c07d0838a64382c6429" exitCode=0 Jan 22 11:51:11 crc kubenswrapper[4975]: I0122 11:51:11.559785 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gc56m/must-gather-wnh2d" event={"ID":"18541382-fcdb-4c06-a929-3753b373d967","Type":"ContainerDied","Data":"6862723075b7bd76fb9e8451ecb6c313874d5a39186b5c07d0838a64382c6429"} Jan 22 11:51:11 crc kubenswrapper[4975]: I0122 11:51:11.562121 4975 scope.go:117] "RemoveContainer" containerID="6862723075b7bd76fb9e8451ecb6c313874d5a39186b5c07d0838a64382c6429" Jan 22 11:51:12 crc kubenswrapper[4975]: I0122 11:51:12.273264 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gc56m_must-gather-wnh2d_18541382-fcdb-4c06-a929-3753b373d967/gather/0.log" Jan 22 11:51:14 crc kubenswrapper[4975]: I0122 11:51:14.755468 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:51:14 crc kubenswrapper[4975]: E0122 11:51:14.756171 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:51:23 crc kubenswrapper[4975]: I0122 11:51:23.397921 4975 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gc56m/must-gather-wnh2d"] Jan 22 11:51:23 crc kubenswrapper[4975]: I0122 11:51:23.399295 4975 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gc56m/must-gather-wnh2d" podUID="18541382-fcdb-4c06-a929-3753b373d967" containerName="copy" containerID="cri-o://191bf25bb682da6f7daf33b9082fced0d375905d9620bfcc448b1c16716e2686" gracePeriod=2 Jan 22 11:51:23 crc kubenswrapper[4975]: I0122 11:51:23.407374 4975 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gc56m/must-gather-wnh2d"] Jan 22 11:51:23 crc kubenswrapper[4975]: I0122 11:51:23.720408 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gc56m_must-gather-wnh2d_18541382-fcdb-4c06-a929-3753b373d967/copy/0.log" Jan 22 11:51:23 crc kubenswrapper[4975]: I0122 11:51:23.721129 4975 generic.go:334] "Generic (PLEG): container finished" podID="18541382-fcdb-4c06-a929-3753b373d967" containerID="191bf25bb682da6f7daf33b9082fced0d375905d9620bfcc448b1c16716e2686" exitCode=143 Jan 22 11:51:23 crc kubenswrapper[4975]: I0122 11:51:23.925113 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gc56m_must-gather-wnh2d_18541382-fcdb-4c06-a929-3753b373d967/copy/0.log" Jan 22 11:51:23 crc kubenswrapper[4975]: I0122 11:51:23.925637 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.062006 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xrmq\" (UniqueName: \"kubernetes.io/projected/18541382-fcdb-4c06-a929-3753b373d967-kube-api-access-4xrmq\") pod \"18541382-fcdb-4c06-a929-3753b373d967\" (UID: \"18541382-fcdb-4c06-a929-3753b373d967\") " Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.062285 4975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/18541382-fcdb-4c06-a929-3753b373d967-must-gather-output\") pod \"18541382-fcdb-4c06-a929-3753b373d967\" (UID: \"18541382-fcdb-4c06-a929-3753b373d967\") " Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.067337 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18541382-fcdb-4c06-a929-3753b373d967-kube-api-access-4xrmq" (OuterVolumeSpecName: "kube-api-access-4xrmq") pod "18541382-fcdb-4c06-a929-3753b373d967" (UID: "18541382-fcdb-4c06-a929-3753b373d967"). InnerVolumeSpecName "kube-api-access-4xrmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.164346 4975 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xrmq\" (UniqueName: \"kubernetes.io/projected/18541382-fcdb-4c06-a929-3753b373d967-kube-api-access-4xrmq\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.220879 4975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18541382-fcdb-4c06-a929-3753b373d967-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "18541382-fcdb-4c06-a929-3753b373d967" (UID: "18541382-fcdb-4c06-a929-3753b373d967"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.266635 4975 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/18541382-fcdb-4c06-a929-3753b373d967-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.743434 4975 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gc56m_must-gather-wnh2d_18541382-fcdb-4c06-a929-3753b373d967/copy/0.log" Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.744120 4975 scope.go:117] "RemoveContainer" containerID="191bf25bb682da6f7daf33b9082fced0d375905d9620bfcc448b1c16716e2686" Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.744450 4975 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gc56m/must-gather-wnh2d" Jan 22 11:51:24 crc kubenswrapper[4975]: I0122 11:51:24.773405 4975 scope.go:117] "RemoveContainer" containerID="6862723075b7bd76fb9e8451ecb6c313874d5a39186b5c07d0838a64382c6429" Jan 22 11:51:25 crc kubenswrapper[4975]: I0122 11:51:25.768769 4975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18541382-fcdb-4c06-a929-3753b373d967" path="/var/lib/kubelet/pods/18541382-fcdb-4c06-a929-3753b373d967/volumes" Jan 22 11:51:26 crc kubenswrapper[4975]: I0122 11:51:26.755620 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:51:26 crc kubenswrapper[4975]: E0122 11:51:26.756383 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:51:40 crc kubenswrapper[4975]: I0122 11:51:40.756028 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:51:40 crc kubenswrapper[4975]: E0122 11:51:40.757241 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:51:51 crc kubenswrapper[4975]: I0122 11:51:51.755188 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:51:51 crc kubenswrapper[4975]: E0122 11:51:51.756168 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:52:05 crc kubenswrapper[4975]: I0122 11:52:05.755212 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:52:05 crc kubenswrapper[4975]: E0122 11:52:05.756473 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:52:16 crc kubenswrapper[4975]: I0122 11:52:16.754739 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:52:16 crc kubenswrapper[4975]: E0122 11:52:16.755476 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:52:30 crc kubenswrapper[4975]: I0122 11:52:30.756491 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:52:30 crc kubenswrapper[4975]: E0122 11:52:30.757840 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:52:34 crc kubenswrapper[4975]: I0122 11:52:34.770733 4975 scope.go:117] "RemoveContainer" containerID="d513bdfa9126bd846559cf5ed3bae38ad2c3c012e01854b696b92bf5984e2c31" Jan 22 11:52:34 crc kubenswrapper[4975]: I0122 11:52:34.795165 4975 scope.go:117] "RemoveContainer" containerID="f86f39faa5b1e925deaa7c6f5e952853d599c5123b04c6d9014d71b126b38a80" Jan 22 11:52:43 crc kubenswrapper[4975]: I0122 11:52:43.754891 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:52:43 crc kubenswrapper[4975]: E0122 11:52:43.756222 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:52:56 crc kubenswrapper[4975]: I0122 11:52:56.755423 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:52:56 crc kubenswrapper[4975]: E0122 11:52:56.756260 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:53:07 crc kubenswrapper[4975]: I0122 11:53:07.766757 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:53:07 crc kubenswrapper[4975]: E0122 11:53:07.768392 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:53:19 crc kubenswrapper[4975]: I0122 11:53:19.755548 4975 scope.go:117] "RemoveContainer" 
containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:53:19 crc kubenswrapper[4975]: E0122 11:53:19.756580 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:53:32 crc kubenswrapper[4975]: I0122 11:53:32.757524 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:53:32 crc kubenswrapper[4975]: E0122 11:53:32.759004 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:53:45 crc kubenswrapper[4975]: I0122 11:53:45.756257 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:53:45 crc kubenswrapper[4975]: E0122 11:53:45.757867 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:53:56 crc kubenswrapper[4975]: I0122 11:53:56.754603 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:53:56 crc kubenswrapper[4975]: E0122 11:53:56.755643 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:54:10 crc kubenswrapper[4975]: I0122 11:54:10.755727 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:54:10 crc kubenswrapper[4975]: E0122 11:54:10.756572 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.104496 4975 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-scsxs"] Jan 22 11:54:19 crc kubenswrapper[4975]: E0122 11:54:19.110279 4975 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="18541382-fcdb-4c06-a929-3753b373d967" containerName="gather" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.110373 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="18541382-fcdb-4c06-a929-3753b373d967" containerName="gather" Jan 22 11:54:19 crc kubenswrapper[4975]: E0122 11:54:19.113641 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerName="extract-utilities" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.113714 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerName="extract-utilities" Jan 22 11:54:19 crc kubenswrapper[4975]: E0122 11:54:19.113763 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerName="registry-server" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.113779 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerName="registry-server" Jan 22 11:54:19 crc kubenswrapper[4975]: E0122 11:54:19.113901 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18541382-fcdb-4c06-a929-3753b373d967" containerName="copy" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.113946 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="18541382-fcdb-4c06-a929-3753b373d967" containerName="copy" Jan 22 11:54:19 crc kubenswrapper[4975]: E0122 11:54:19.113978 4975 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerName="extract-content" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.114012 4975 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerName="extract-content" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.115213 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="18541382-fcdb-4c06-a929-3753b373d967" containerName="gather" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.115275 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="18541382-fcdb-4c06-a929-3753b373d967" containerName="copy" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.115311 4975 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4efe14-8594-41be-a265-ef19c7c4065b" containerName="registry-server" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.120419 4975 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.132805 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scsxs"] Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.156675 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jh9f\" (UniqueName: \"kubernetes.io/projected/8d468d43-9982-4b35-8d49-3989587f85fa-kube-api-access-8jh9f\") pod \"redhat-operators-scsxs\" (UID: \"8d468d43-9982-4b35-8d49-3989587f85fa\") " pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.156784 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d468d43-9982-4b35-8d49-3989587f85fa-catalog-content\") pod \"redhat-operators-scsxs\" (UID: \"8d468d43-9982-4b35-8d49-3989587f85fa\") " pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.156807 4975 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d468d43-9982-4b35-8d49-3989587f85fa-utilities\") pod \"redhat-operators-scsxs\" (UID: \"8d468d43-9982-4b35-8d49-3989587f85fa\") " pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.258124 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d468d43-9982-4b35-8d49-3989587f85fa-catalog-content\") pod \"redhat-operators-scsxs\" (UID: \"8d468d43-9982-4b35-8d49-3989587f85fa\") " pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.258181 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d468d43-9982-4b35-8d49-3989587f85fa-utilities\") pod \"redhat-operators-scsxs\" (UID: \"8d468d43-9982-4b35-8d49-3989587f85fa\") " pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.258278 4975 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jh9f\" (UniqueName: \"kubernetes.io/projected/8d468d43-9982-4b35-8d49-3989587f85fa-kube-api-access-8jh9f\") pod \"redhat-operators-scsxs\" (UID: \"8d468d43-9982-4b35-8d49-3989587f85fa\") " pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.259080 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d468d43-9982-4b35-8d49-3989587f85fa-catalog-content\") pod \"redhat-operators-scsxs\" (UID: \"8d468d43-9982-4b35-8d49-3989587f85fa\") " pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.259110 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d468d43-9982-4b35-8d49-3989587f85fa-utilities\") pod \"redhat-operators-scsxs\" (UID: \"8d468d43-9982-4b35-8d49-3989587f85fa\") " pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.287777 4975 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8jh9f\" (UniqueName: \"kubernetes.io/projected/8d468d43-9982-4b35-8d49-3989587f85fa-kube-api-access-8jh9f\") pod \"redhat-operators-scsxs\" (UID: \"8d468d43-9982-4b35-8d49-3989587f85fa\") " pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.450384 4975 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scsxs" Jan 22 11:54:19 crc kubenswrapper[4975]: I0122 11:54:19.956684 4975 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scsxs"] Jan 22 11:54:20 crc kubenswrapper[4975]: I0122 11:54:20.561441 4975 generic.go:334] "Generic (PLEG): container finished" podID="8d468d43-9982-4b35-8d49-3989587f85fa" containerID="c417781010aede614043ead24c7061a07a548f0a2b0e97867c7ba3ccb5931599" exitCode=0 Jan 22 11:54:20 crc kubenswrapper[4975]: I0122 11:54:20.561493 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scsxs" event={"ID":"8d468d43-9982-4b35-8d49-3989587f85fa","Type":"ContainerDied","Data":"c417781010aede614043ead24c7061a07a548f0a2b0e97867c7ba3ccb5931599"} Jan 22 11:54:20 crc kubenswrapper[4975]: I0122 11:54:20.561756 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scsxs" event={"ID":"8d468d43-9982-4b35-8d49-3989587f85fa","Type":"ContainerStarted","Data":"28b4a0343594cec4d45368cf4053fcb11fbd30fc728f811de10626c7df2caad1"} Jan 22 11:54:20 crc kubenswrapper[4975]: I0122 11:54:20.563632 4975 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:54:22 crc kubenswrapper[4975]: I0122 11:54:22.593642 4975 generic.go:334] "Generic (PLEG): container finished" podID="8d468d43-9982-4b35-8d49-3989587f85fa" containerID="e33b5fcd238b6bed3f286fe1d3b6cca086f5559296b089a74dd3c07a181a8f1c" exitCode=0 Jan 22 11:54:22 crc kubenswrapper[4975]: I0122 11:54:22.593717 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scsxs" event={"ID":"8d468d43-9982-4b35-8d49-3989587f85fa","Type":"ContainerDied","Data":"e33b5fcd238b6bed3f286fe1d3b6cca086f5559296b089a74dd3c07a181a8f1c"} Jan 22 11:54:23 crc kubenswrapper[4975]: I0122 11:54:23.605006 4975 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scsxs" event={"ID":"8d468d43-9982-4b35-8d49-3989587f85fa","Type":"ContainerStarted","Data":"938e8cdff0d7c6338e2e88fc2d7238901143d5030ad036c2be3f2a10ecd0a51b"} Jan 22 11:54:23 crc kubenswrapper[4975]: I0122 11:54:23.629989 4975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-scsxs" podStartSLOduration=2.207302507 podStartE2EDuration="4.629969579s" podCreationTimestamp="2026-01-22 11:54:19 +0000 UTC" firstStartedPulling="2026-01-22 11:54:20.563390014 +0000 UTC m=+4653.221426134" lastFinishedPulling="2026-01-22 11:54:22.986057106 +0000 UTC m=+4655.644093206" observedRunningTime="2026-01-22 11:54:23.623776941 +0000 UTC m=+4656.281813051" watchObservedRunningTime="2026-01-22 11:54:23.629969579 +0000 UTC m=+4656.288005689" Jan 22 11:54:23 crc kubenswrapper[4975]: I0122 11:54:23.754752 4975 scope.go:117] "RemoveContainer" containerID="c0288a4f96651da4f69674013dbaa62af3322faf37194ac6909abd4ff50e46ad" Jan 22 11:54:23 crc kubenswrapper[4975]: E0122 11:54:23.754998 4975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fmw72_openshift-machine-config-operator(c717ed91-9c30-47bc-b814-ceb1b25939a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fmw72" podUID="c717ed91-9c30-47bc-b814-ceb1b25939a8"